CN114202778A - Method and system for estimating three-dimensional gesture of finger by planar fingerprint - Google Patents

Method and system for estimating three-dimensional gesture of finger by planar fingerprint

Info

Publication number
CN114202778A
CN114202778A (application CN202111301866.2A)
Authority
CN
China
Prior art keywords
plane
fingerprint
fingerprint image
training
planar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111301866.2A
Other languages
Chinese (zh)
Inventor
冯建江
周杰
贺珂
殷其昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202111301866.2A priority Critical patent/CN114202778A/en
Publication of CN114202778A publication Critical patent/CN114202778A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method and a system for estimating the three-dimensional pose of a finger from a planar fingerprint, belonging to the technical field of human-computer interaction. The method acquires a planar fingerprint image of an object to be measured; determines the planar fingerprint pose corresponding to the planar fingerprint image using a pre-trained planar pose estimation model; and, from the planar fingerprint pose, determines the complete three-dimensional finger pose matching the object to be measured by means of parameter learning or statistical modeling. By pre-training an accurate planar pose estimation model and, on that basis, determining the matching complete three-dimensional finger pose through parameter learning or statistical modeling, the scheme solves the technical problem that existing fingerprint-based finger three-dimensional pose estimation techniques are unsatisfactory in both function and convenience.

Description

Method and system for estimating three-dimensional gesture of finger by planar fingerprint
Technical Field
The application relates to the technical field of human-computer interaction, in particular to a method and a system for estimating a three-dimensional gesture of a finger by a planar fingerprint.
Background
In the field of finger three-dimensional pose estimation from fingerprints, the prior art can be divided into several types according to the modality (acquisition technique) of the input fingerprint:
1) Pose estimation based on capacitive-sensing fingerprints. A capacitive image is formed from the difference in sensor capacitance between regions where the fingerprint contacts the screen and regions where it does not; because the principle is relatively simple, it is widely used in touch-screen devices. Xiao et al. extracted 42 features from capacitive-sensing fingerprints and trained a Gaussian regression model to estimate the pitch and deflection angles, achieving good results on smartphones and smartwatches. However, the roll angle cannot be predicted from capacitive-sensing fingerprints, so the obtained fingerprint pose is incomplete; moreover, because the sensor resolution is low, the ridges and valleys of the fingerprint cannot be distinguished, which limits the accuracy of pose estimation;
2) Pose estimation fusing other modal information. One approach uses an externally mounted depth camera to obtain depth information of the fingerprint and incorporates prior knowledge to constrain the pitch angle to the range 0-90 degrees; another binds a camera to the subject's fingertip and computes the pitch and deflection angles by detecting illumination-intensity changes on the nail. These schemes use modal information beyond the planar fingerprint to obtain the complete three-dimensional pose of the fingerprint, but they require additional hardware, which hinders practical application;
3) Pose estimation based on a planar pressed fingerprint. So far only one scheme estimates the three-dimensional finger pose from a planar pressed fingerprint: Holz and Baudisch proposed registering, in a database, planar fingerprint images of a specific finger at each angle together with the corresponding true angles; in the test stage, the three-dimensional pose of the current input fingerprint, including pitch, roll, and deflection angles, is inferred by retrieving the database fingerprint most similar to the input.
In summary, existing techniques for estimating the three-dimensional pose of a fingerprint either cannot accurately estimate the complete pose, require additional sensor hardware for assistance, or require the user to go through a cumbersome fingerprint registration step; they are therefore unsatisfactory in both function and convenience. A technique that estimates the three-dimensional pose directly from the planar fingerprint image is thus needed, and would greatly advance interactive fingerprint applications.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first objective of the present application is to provide a method for estimating a three-dimensional gesture of a finger from a planar fingerprint, so as to solve the technical problem that the current finger three-dimensional gesture estimation technology of fingerprints is not ideal in terms of both function and convenience.
A second object of the present application is to propose a system for estimating the three-dimensional pose of a finger from a planar fingerprint.
In order to achieve the above object, an embodiment of the present application in a first aspect proposes a method for estimating a three-dimensional gesture of a finger from a planar fingerprint, including:
collecting a planar fingerprint image of an object to be detected;
determining a plane fingerprint attitude corresponding to the plane fingerprint image by using a pre-trained plane attitude estimation model;
and determining the complete three-dimensional gesture of the finger matched with the object to be detected in a parameter learning or statistical modeling mode according to the planar fingerprint gesture.
Optionally, in an embodiment of the present application, the planar fingerprint pose includes a position and a deflection angle of a planar fingerprint image;
the determining of the complete three-dimensional gesture of the finger matched with the object to be detected in a parameter learning or statistical modeling mode comprises the following steps:
determining a three-dimensional attitude mapping function corresponding to the plane fingerprint image according to the parameter learning or statistical modeling mode;
determining a rolling angle and a pitching angle of the plane fingerprint image according to the three-dimensional attitude mapping function and the position of the plane fingerprint image;
and determining a complete three-dimensional posture matched with the object to be detected according to the deflection angle, the rolling angle and the pitch angle of the plane fingerprint image.
Optionally, in an embodiment of the present application, the determining, according to the parameter learning or statistical modeling, a three-dimensional pose mapping function corresponding to the planar fingerprint image includes:
constructing a first plane fingerprint image database in a machine learning mode, wherein the first plane fingerprint image database comprises feature descriptors;
determining a plurality of planar fingerprint images matched with the feature descriptors from a first planar fingerprint image database, and determining a first mapping parameter corresponding to each of the plurality of planar fingerprint images;
fusing the plurality of first mapping parameters to obtain a second mapping parameter, and determining the three-dimensional pose mapping function corresponding to the planar fingerprint image according to the second mapping parameter, wherein the second mapping parameter is obtained by computing the average or weighted average of the plurality of first mapping parameters, or by averaging a preset number of best-matched first mapping parameters;
or determining at least one probability model for describing the finger through the simulation system and the actually acquired data set, and determining the three-dimensional posture mapping function corresponding to the current plane fingerprint image according to the at least one probability model for describing the finger.
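The fusion strategies just listed (plain average, similarity-weighted average, or average of the best-matched parameters) can be sketched as follows. This is a minimal illustration; the function name, the use of a 7-element parameter vector per image, and the similarity scores are assumptions for the sketch, not taken from the patent:

```python
import numpy as np

def fuse_mapping_parameters(first_params, similarities=None, top_k=None):
    """Fuse per-image (first) mapping parameters into one second mapping
    parameter vector: plain average, similarity-weighted average, or
    average of the top-k best-matched entries (top_k needs similarities)."""
    params = np.asarray(first_params, dtype=float)  # shape (n_matches, n_params)
    if top_k is not None:
        # keep the k most similar matches, then average them
        order = np.argsort(similarities)[::-1][:top_k]
        return params[order].mean(axis=0)
    if similarities is not None:
        w = np.asarray(similarities, dtype=float)
        return (params * w[:, None]).sum(axis=0) / w.sum()
    return params.mean(axis=0)
```

Any of the three branches yields one fused parameter vector from which the three-dimensional pose mapping function is then built.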
Optionally, in an embodiment of the present application, before determining a planar fingerprint pose corresponding to the planar fingerprint image by using a pre-trained planar pose estimation model, the method further includes:
acquiring a training plane fingerprint image and training three-dimensional angle data corresponding to the training plane fingerprint image;
and training a plane attitude estimation model according to the training plane fingerprint image and the training three-dimensional angle data to obtain the pre-trained plane attitude estimation model.
Optionally, in an embodiment of the present application, the acquiring a training planar fingerprint image and training three-dimensional angle data corresponding to the training planar fingerprint image includes:
acquiring the plane fingerprint image for training through a plane fingerprint acquisition instrument;
acquiring the training three-dimensional angle data corresponding to the training planar fingerprint image by using two three-axis gyroscopes, or by using an optical tracking measurement system, or by using the simulation system and a data generator; wherein
the training three-dimensional angle data comprises a true pitch angle, a true deflection angle, and a true roll angle, wherein the true pitch angle ranges from -80 degrees to 20 degrees, the true deflection angle ranges from -90 degrees to 90 degrees, and the true roll angle ranges from -75 degrees to 75 degrees.
Optionally, in an embodiment of the present application, after the acquiring the training planar fingerprint image and the training three-dimensional angle data corresponding to the training planar fingerprint image, the method further includes:
storing the training planar fingerprint image into a second planar fingerprint image database;
acquiring a plurality of training plane fingerprint images, a feature descriptor and a first mapping parameter of each training plane fingerprint image from the second plane fingerprint image database according to preset conditions;
and respectively storing the plurality of training plane fingerprint images, the first mapping parameter and the feature descriptors of the plurality of training plane fingerprint images into the first plane fingerprint image database.
Optionally, in an embodiment of the present application, the training the planar pose estimation model according to the training planar fingerprint image and the training three-dimensional angle data includes:
and acquiring a first mapping parameter of each training plane fingerprint image in the first plane fingerprint image database by using a function fitting mode, wherein the first mapping parameter comprises a pitch angle mapping parameter and a roll angle mapping parameter.
Optionally, in one embodiment of the present application, the pitch angle mapping parameters of each training planar fingerprint image in the first planar fingerprint image database are determined by the following formula:

f_pitch(x, y) = b1·x^2 + b2·y^2 + b3·x + b4·y + b5·ln(x) + b6·ln(y) + b7

where (x, y) is the position of the fingerprint, f_pitch(x, y) is the true pitch angle of the fingerprint, and (b1, b2, ..., b7) are the pitch angle mapping parameters corresponding to the planar fingerprint image.

The roll angle mapping parameters of each training planar fingerprint image in the first planar fingerprint image database are determined by:

f_roll(x, y) = a1·x^2 + a2·y^2 + a3·x + a4·y + a5·ln(x) + a6·ln(y) + a7

where (x, y) is the position of the fingerprint, f_roll(x, y) is the true roll angle of the fingerprint, and (a1, a2, ..., a7) are the roll angle mapping parameters corresponding to the planar fingerprint image.
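For concreteness, the two mapping functions can be evaluated directly from their formulas. This is a straightforward transcription; the ordering of the parameter vectors (b1..b7 and a1..a7) is an assumption of the sketch:

```python
import numpy as np

def pitch_angle(x, y, b):
    """f_pitch(x, y) = b1*x^2 + b2*y^2 + b3*x + b4*y + b5*ln(x) + b6*ln(y) + b7."""
    return (b[0] * x**2 + b[1] * y**2 + b[2] * x + b[3] * y
            + b[4] * np.log(x) + b[5] * np.log(y) + b[6])

def roll_angle(x, y, a):
    """f_roll(x, y) has the same functional form, with parameters a1..a7."""
    return (a[0] * x**2 + a[1] * y**2 + a[2] * x + a[3] * y
            + a[4] * np.log(x) + a[5] * np.log(y) + a[6])
```

Note that the ln(x) and ln(y) terms require positive fingerprint coordinates, so positions are presumably expressed in a coordinate frame where x, y > 0.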
Optionally, in an embodiment of the present application, the planar pose estimation model is trained with the following loss:

L_all = L_pos + λ·L_yaw
L_pos = ‖p − p′‖²
L_yaw = ‖θ − θ′‖²

where L_all is the total loss value, L_pos is the loss value of the position prediction, L_yaw is the loss value of the deflection angle, λ is a weighting coefficient, p is the true position of the fingerprint, p′ is the planar pose estimation model's predicted position of the fingerprint, θ is the true deflection angle of the fingerprint, and θ′ is the model's predicted deflection angle.
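The combined training loss can be computed as follows. This is a minimal sketch; the squared-L2 form of both terms and the function name are assumptions made for illustration:

```python
import numpy as np

def training_loss(p_true, p_pred, theta_true, theta_pred, lam=1.0):
    """L_all = L_pos + lambda * L_yaw, with squared-L2 position and
    deflection-angle terms (assumed form)."""
    l_pos = np.sum((np.asarray(p_true, float) - np.asarray(p_pred, float)) ** 2)
    l_yaw = (theta_true - theta_pred) ** 2
    return l_pos + lam * l_yaw
```

The weighting coefficient lam (λ) balances how strongly the deflection-angle error contributes relative to the position error.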
In summary, the method provided in the embodiment of the first aspect of the present application acquires a planar fingerprint image of an object to be measured; determines the planar fingerprint pose corresponding to the planar fingerprint image using a pre-trained planar pose estimation model; and, from the planar fingerprint pose, determines the complete three-dimensional finger pose matching the object to be measured by parameter learning or statistical modeling. By pre-training an accurate planar pose estimation model and, on that basis, determining the matching complete three-dimensional finger pose by mapping or fitting, the method solves the technical problem that existing fingerprint-based finger three-dimensional pose estimation techniques are unsatisfactory in both function and convenience.
In order to achieve the above object, a system for estimating a three-dimensional pose of a finger from a planar fingerprint according to an embodiment of the second aspect of the present application includes:
the acquisition module is used for acquiring a planar fingerprint image of an object to be detected;
the plane attitude estimation module is used for determining a plane fingerprint attitude corresponding to the plane fingerprint image by utilizing a pre-trained plane attitude estimation model;
and the determining module is used for determining the complete three-dimensional gesture of the finger matched with the object to be detected in a parameter learning or statistical modeling mode according to the planar fingerprint gesture.
In summary, in the system provided in the embodiment of the second aspect of the present application, the acquisition module acquires a planar fingerprint image of an object to be measured; the planar pose estimation module determines the planar fingerprint pose corresponding to the planar fingerprint image using a pre-trained planar pose estimation model; and the determining module determines, from the planar fingerprint pose, the complete three-dimensional finger pose matching the object to be measured by parameter learning or statistical modeling. By pre-training an accurate planar pose estimation model and, on that basis, determining the matching complete three-dimensional finger pose by mapping or fitting, the system solves the technical problem that existing fingerprint-based finger three-dimensional pose estimation techniques are unsatisfactory in both function and convenience.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic diagram of three gesture angles of a finger while pressing down on a screen according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for estimating a three-dimensional gesture of a finger from a planar fingerprint according to an embodiment of the present application;
FIG. 3 is a schematic diagram of data acquisition provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a finger three-dimensional gesture fit provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a deep neural network according to an embodiment of the present application;
FIG. 6 is a flowchart illustrating a method for estimating a three-dimensional gesture of a finger from a planar fingerprint according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a system for estimating a three-dimensional gesture of a finger from a planar fingerprint according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application. On the contrary, the embodiments of the application include all changes, modifications and equivalents coming within the spirit and terms of the claims appended hereto.
In the field of human-computer interaction, touch-screen input is favored by a large number of smart mobile devices as a simple and fast mode of interaction. Along with the development of chip manufacturing and mobile operating system technologies, the touch-screen interaction mode itself keeps iterating. Conventional smart devices can sense the area pressed by the user's finger and give corresponding feedback; for example, operations such as enlarging, rotating, and zooming require at least two fingers to complete smoothly. If the angle of the finger as it contacts the touch screen could be accurately estimated and used as additional input information, user experience would be greatly improved, the logic of some operations simplified, the application scenarios of fingerprint input widened, and more interesting forms of interaction enabled. Moreover, if the three-dimensional pose can be estimated directly from the planar fingerprint, no extra hardware burden is imposed on smart devices such as mobile phones.
The three pose angles of a finger pressing a screen are shown in FIG. 1. Based on the roll, pitch, and deflection (yaw) angles, a user can be asked to tilt the finger toward a specific pose, and whether the current fingerprint is forged can be judged by checking whether the user's response meets a certain angle threshold; on this basis, smart-device system developers can enhance the security and reliability of fingerprint recognition systems. Meanwhile, the three pose angles of the finger can be provided on top of existing interaction information, creating more combined gestures, enriching touch-screen input, and increasing the fun of interaction, which helps narrow the gap between touch-screen devices and traditional physical keyboards while preserving their simplicity and ease of use.
Example 1
Fig. 2 is a flowchart of a method for estimating a three-dimensional gesture of a finger from a planar fingerprint according to an embodiment of the present application.
As shown in fig. 2, a method for estimating a three-dimensional gesture of a finger from a planar fingerprint according to an embodiment of the present application includes the following steps:
step 210, collecting a planar fingerprint image of an object to be detected;
220, determining a plane fingerprint attitude corresponding to the plane fingerprint image by using a pre-trained plane attitude estimation model;
and step 230, determining a complete finger three-dimensional gesture matched with the object to be detected in a parameter learning or statistical modeling mode according to the planar fingerprint gesture.
In the embodiment of the application, the plane fingerprint gesture comprises the position and the deflection angle of a plane fingerprint image;
determining a complete finger three-dimensional gesture matched with an object to be detected in a parameter learning or statistical modeling mode, wherein the method comprises the following steps:
determining a three-dimensional attitude mapping function corresponding to the plane fingerprint image according to a parameter learning or statistical modeling mode;
determining a rolling angle and a pitching angle of the plane fingerprint image according to the three-dimensional attitude mapping function and the position of the plane fingerprint image;
and determining a complete three-dimensional attitude matched with the object to be detected according to the deflection angle, the rolling angle and the pitch angle of the plane fingerprint image.
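The three steps above can be sketched as a single assembly function. The names and the dictionary output format are illustrative; `f_roll` and `f_pitch` stand for the fitted three-dimensional pose mapping functions evaluated at the fingerprint's position:

```python
def full_pose(position, yaw, f_roll, f_pitch):
    """Assemble the complete 3D pose: the deflection (yaw) angle comes from
    the planar pose estimation model; roll and pitch are read off the
    mapping functions at the fingerprint position (x, y)."""
    x, y = position
    return {"yaw": yaw, "roll": f_roll(x, y), "pitch": f_pitch(x, y)}
```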
In the embodiment of the present application, determining a three-dimensional pose mapping function corresponding to a planar fingerprint image according to a parameter learning or statistical modeling method includes:
constructing a first plane fingerprint image database in a machine learning mode, wherein the first plane fingerprint image database comprises a feature descriptor;
determining a plurality of planar fingerprint images matched with the feature descriptors from a first planar fingerprint image database, and determining a first mapping parameter corresponding to each of the plurality of planar fingerprint images;
fusing the plurality of first mapping parameters to obtain a second mapping parameter, and determining the three-dimensional pose mapping function corresponding to the planar fingerprint image according to the second mapping parameter, wherein the second mapping parameter is obtained by computing the average or weighted average of the plurality of first mapping parameters, or by averaging a preset number of best-matched first mapping parameters;
or determining at least one probability model for describing the finger through the simulation system and the actually acquired data set, and determining the three-dimensional posture mapping function corresponding to the current plane fingerprint image according to the at least one probability model for describing the finger.
Specifically, when the three-dimensional pose mapping function corresponding to the planar fingerprint image is determined by statistical modeling, at least one probability model describing the finger is first determined from the simulation system and an actually acquired data set. A number of candidate spatial transformations are then applied to the probability model; for each, the position and planar fingerprint pose obtained by projection are compared with the observed ones; the spatial transformation whose projection is closest is selected; and the three-dimensional pose mapping function corresponding to the current planar fingerprint image is computed from that closest model.
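The transform-selection procedure just described can be sketched as follows. This is a hypothetical outline: the `project` callable stands in for projecting the finger probability model under a given spatial transform, and the candidate-transform list is assumed to be supplied by the caller:

```python
import numpy as np

def best_transform(observed_pose, candidate_transforms, project):
    """Pick the spatial transform whose projected planar pose best
    matches the observed planar fingerprint pose (nearest in L2)."""
    best, best_err = None, float("inf")
    for T in candidate_transforms:
        predicted = project(T)  # projected planar pose under transform T
        err = np.linalg.norm(np.asarray(predicted, float)
                             - np.asarray(observed_pose, float))
        if err < best_err:
            best, best_err = T, err
    return best
```

In practice the candidate transforms would sample the plausible range of finger orientations, and the selected transform determines the mapping function used for the current image.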
In this embodiment of the present application, before determining a planar fingerprint pose corresponding to a planar fingerprint image by using a pre-trained planar pose estimation model, the method further includes:
acquiring a training plane fingerprint image and training three-dimensional angle data corresponding to the training plane fingerprint image;
and training the plane attitude estimation model according to the training plane fingerprint image and the training three-dimensional angle data to obtain a pre-trained plane attitude estimation model.
In the embodiment of the present application, acquiring a planar fingerprint image for training and three-dimensional angle data for training corresponding to the planar fingerprint image for training includes:
acquiring a plane fingerprint image for training through a plane fingerprint acquisition instrument;
acquiring the training three-dimensional angle data corresponding to the training planar fingerprint image by using two three-axis gyroscopes, or by using an optical tracking measurement system, or by using the simulation system and a data generator; wherein
the training three-dimensional angle data comprises a true pitch angle, a true yaw angle, and a true roll angle, wherein the true pitch angle ranges from -80 degrees to 20 degrees, the true yaw angle ranges from -90 degrees to 90 degrees, and the true roll angle ranges from -75 degrees to 75 degrees.
Specifically, the accuracy of the planar pose estimation model depends on the richness of the collected samples, so data collection is a crucial part of the present application. The following three data collection schemes are adopted:
In the first data acquisition scheme, planar fingerprint images are acquired by a planar fingerprint scanner while three-axis gyroscopes read the three-dimensional angle data; the setup is shown in FIG. 3. During acquisition, two three-axis gyroscopes are fixed on the fingerprint sensor and on the finger to be acquired, respectively. The acquirer rolls the finger bound with the gyroscope from one side to the other on the fingerprint sensor; under program control, the fingerprint scanner captures planar fingerprint images at a frequency of 50 Hz while the gyroscopes read out the angles synchronously, and the three-dimensional angle data corresponding to each planar fingerprint image is obtained as the difference between the readings of the two gyroscopes;
the second data acquisition scheme utilizes an optical tracking measurement system to combine with a fingerprint acquisition instrument to synchronously acquire three-dimensional angle data and a corresponding plane fingerprint image.
In the third data acquisition scheme, a three-dimensional fingerprint database is constructed in advance, and planar fingerprint images at arbitrary three-dimensional angles are synthesized from it.
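The per-frame angle differencing used in the first scheme can be sketched as follows. This is a minimal illustration; gyroscope readings are assumed to arrive as one (roll, pitch, yaw) tuple per 50 Hz frame, already time-synchronized with the fingerprint frames:

```python
def relative_angles(finger_gyro, sensor_gyro):
    """Per-frame 3D angle of the finger relative to the sensor plane,
    computed as the difference of the two gyroscope readings."""
    return [tuple(f - s for f, s in zip(fg, sg))
            for fg, sg in zip(finger_gyro, sensor_gyro)]
```

Subtracting the sensor-mounted gyroscope's reading cancels any motion of the scanner itself, leaving only the finger's pose relative to the sensing surface.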
Further, to ensure that the samples are sufficiently rich, no fewer than 500 fingerprint sequences and no fewer than 40,000 planar fingerprint images are collected, and the three-dimensional angle data corresponding to each planar fingerprint image is recorded.
Specifically, data acquisition only needs to be performed once, after which the method for estimating the three-dimensional finger pose from a planar fingerprint provided by this embodiment can be applied to any finger of any person. The method therefore requires no finger registration, which is an essential difference from the method proposed by Holz and Baudisch.
In the embodiment of the present application, after acquiring the planar fingerprint image for training and the three-dimensional angle data for training corresponding to the planar fingerprint image for training, the method further includes:
storing the training plane fingerprint image into a second plane fingerprint image database;
acquiring a plurality of training plane fingerprint images, a feature descriptor of each training plane fingerprint image and a first mapping parameter from a second plane fingerprint image database according to preset conditions;
and respectively storing the plurality of training plane fingerprint images, the first mapping parameters and the feature descriptors of the plurality of training plane fingerprint images into a first plane fingerprint image database.
In the embodiment of the present application, training a plane posture estimation model according to a plane fingerprint image for training and three-dimensional angle data for training includes:
acquiring a first mapping parameter of each training plane fingerprint image in the first plane fingerprint image database by function fitting, wherein the first mapping parameter comprises a pitch angle mapping parameter and a roll angle mapping parameter.
Specifically, the fitting of the three-dimensional finger pose is shown in fig. 4: the contact fingerprint image is first processed by the fingerprint registration algorithm, i.e., the planar pose estimation model, to obtain its relative position within the rolled fingerprint; the three-dimensional finger surface is then fitted using the second planar fingerprint image database, i.e., a set of fingerprint images with known poses in the training database; and finally the three-dimensional angles of the finger are obtained.
In the embodiment of the present application, the pitch angle mapping parameter of each training planar fingerprint image in the first planar fingerprint image database is determined by the following formula:
f_pitch(x, y) = b_1 x^2 + b_2 y^2 + b_3 x + b_4 y + b_5 ln(x) + b_6 ln(y) + b_7

wherein (x, y) is the position of the fingerprint, f_pitch(x, y) is the true value of the pitch angle of the fingerprint, and b = (b_1, b_2, …, b_7) are the pitch angle mapping parameters corresponding to the planar fingerprint image;
determining roll angle mapping parameters for each of the training planar fingerprint images in the first database of planar fingerprint images by:
f_roll(x, y) = a_1 x^2 + a_2 y^2 + a_3 x + a_4 y + a_5 ln(x) + a_6 ln(y) + a_7

wherein (x, y) is the position of the fingerprint, f_roll(x, y) is the true value of the roll angle of the fingerprint, and a = (a_1, a_2, …, a_7) are the roll angle mapping parameters corresponding to the planar fingerprint image.
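As a concrete illustration, the two mapping functions above can be fitted by ordinary least squares over the basis terms x^2, y^2, x, y, ln(x), ln(y), 1. The following is a minimal sketch; the function names and the use of NumPy are illustrative assumptions, not part of the patent:

```python
import numpy as np

def fit_angle_mapping(positions, angles):
    """Least-squares fit of f(x, y) = c1*x^2 + c2*y^2 + c3*x + c4*y
    + c5*ln(x) + c6*ln(y) + c7 from fingerprint positions (N, 2) to a
    ground-truth angle vector (pitch or roll). Returns 7 coefficients."""
    x, y = positions[:, 0], positions[:, 1]
    # Design matrix: one column per basis term of the mapping function.
    A = np.column_stack([x**2, y**2, x, y,
                         np.log(x), np.log(y), np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, angles, rcond=None)
    return coeffs

def eval_angle_mapping(coeffs, x, y):
    """Evaluate the fitted mapping at a single position (x, y)."""
    c1, c2, c3, c4, c5, c6, c7 = coeffs
    return (c1*x**2 + c2*y**2 + c3*x + c4*y
            + c5*np.log(x) + c6*np.log(y) + c7)
```

The same routine would be run once with the pitch-angle truths and once with the roll-angle truths to obtain the b and a parameter vectors, respectively.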
Specifically, the three-dimensional angle data of the finger directly influences the horizontal and vertical position of the planar fingerprint image, so the mapping from planar fingerprint position to three-dimensional finger pose can be inferred using the constructed first planar fingerprint image database. Different finger-surface models affect the concrete form of the mapping function, but the overall idea and implementation flow remain the same.
In an embodiment of the present application, the method further comprises training the planar pose estimation model by:
L_all = L_pos + λ L_yaw
L_pos = ‖p − p′‖²
L_yaw = ‖θ − θ′‖²

wherein L_all represents the total loss function value, L_pos represents the loss function value of the position prediction, L_yaw represents the loss function value of the yaw (deflection) angle, and λ is a weighting coefficient; p represents the true value of the position of the fingerprint, p′ represents the predicted value of the planar pose estimation model for the position of the fingerprint, θ represents the true value of the deflection angle of the fingerprint, and θ′ represents the predicted value of the planar pose estimation model for the deflection angle of the fingerprint.
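A minimal sketch of this combined loss, assuming (as the formulas suggest) squared errors on both the position and the yaw angle; the function name and NumPy usage are illustrative:

```python
import numpy as np

def pose_loss(p_true, p_pred, theta_true, theta_pred, lam=1.0):
    """Combined training loss L_all = L_pos + lambda * L_yaw, where
    L_pos is the squared error on the fingerprint position and
    L_yaw is the squared error on the yaw (deflection) angle."""
    l_pos = np.sum((np.asarray(p_true) - np.asarray(p_pred)) ** 2)
    l_yaw = np.sum((np.asarray(theta_true) - np.asarray(theta_pred)) ** 2)
    return l_pos + lam * l_yaw
```

The weighting coefficient λ balances the position term against the angle term and would be tuned on the validation set.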
Specifically, the planar pose estimation model is a deep neural network whose structure is shown in fig. 5. To make full use of the information in the planar fingerprint image and improve the accuracy of pose prediction, the deep neural network is divided by function into the following three modules:
a feature extraction backbone network, built on the backbone framework with the strongest expressive ability (Yin et al.) with minor modifications; it makes full use of multi-scale fusion and outputs a feature descriptor of fixed dimensionality;
an attention mechanism module, which combines and modifies the latest attention mechanism of Yin et al.; its output serves as a mask that enhances the foreground region in the output of the feature extraction module, so that the deep neural network attends to the foreground of the fingerprint image and ignores invalid background information;
and a three-dimensional angle prediction module, which outputs the deep neural network's predicted value of the fingerprint position and its predicted value of the fingerprint deflection angle, i.e., the yaw angle.
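The mask-based foreground enhancement described for the attention module can be sketched as an elementwise product between the feature map and a sigmoid-squashed mask. This is a simplified illustration only; the patent does not specify the modules at this level of detail:

```python
import numpy as np

def apply_attention_mask(features, mask_logits):
    """Enhance foreground regions of a feature map of shape (C, H, W)
    with a single-channel attention mask of shape (H, W): the mask is
    squashed to (0, 1) with a sigmoid and multiplied into every feature
    channel, so background responses are suppressed toward zero."""
    mask = 1.0 / (1.0 + np.exp(-mask_logits))  # sigmoid -> (0, 1)
    return features * mask[None, :, :]         # broadcast over channels
```

In the actual network the mask would be produced by the attention module and the product fed to the three-dimensional angle prediction head.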
Furthermore, to improve the precision and generalization ability of the deep neural network, representative planar fingerprint images are selected from the first planar fingerprint image database and placed into the second planar fingerprint image database. At least 400 fingerprint sequences and at least 25000 planar fingerprint images from the second database are used to train the deep neural network, the training data amounting to at least 70% of the data in the second planar fingerprint image database; at least 4000 fingerprint images are selected as a validation set to tune the hyper-parameters of training, further improving the generalization ability and robustness of the deep neural network.
Specifically, fig. 6 is a schematic flowchart of a method for estimating a three-dimensional gesture of a finger from a planar fingerprint according to an embodiment of the present application, where the method includes the following four steps:
step 210, collecting fingerprints, and putting the sampled data into a first plane fingerprint image database;
step 220, selecting representative sample data from the first planar fingerprint image database, placing it into the second planar fingerprint image database, and training the deep neural network on the sample data in the second planar fingerprint image database using the loss function;
step 230, fitting, by means of a function, the planar fingerprint position information obtained from the deep neural network to the sampled roll angle and pitch angle data, so as to obtain the fitting parameters mapping the planar fingerprint image to the three-dimensional finger pose, and storing the fitting parameters in the first planar fingerprint image database;
step 240, in the testing stage, obtaining the deep descriptor, deflection angle, and position data corresponding to the test image using the deep neural network, obtaining the mapping parameters corresponding to the test image by nearest-neighbor search over the deep descriptors, and computing the roll angle and pitch angle data of the test image from the mapping parameters and the position data.
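The testing stage above can be sketched as a nearest-neighbor lookup over deep descriptors followed by evaluation of the stored mapping polynomials at the predicted position. This is an illustrative sketch under the mapping form given earlier; the function and variable names are assumptions:

```python
import numpy as np

def estimate_roll_pitch(desc, position,
                        db_descs, db_params_roll, db_params_pitch):
    """Find the database entry whose deep descriptor is closest to the
    query descriptor, then evaluate that entry's fitted roll and pitch
    mapping polynomials at the predicted fingerprint position (x, y)."""
    # Nearest neighbour over feature descriptors (Euclidean distance).
    dists = np.linalg.norm(db_descs - desc, axis=1)
    i = int(np.argmin(dists))
    x, y = position
    # Basis terms of the mapping f(x, y), matching the fitted form.
    basis = np.array([x**2, y**2, x, y, np.log(x), np.log(y), 1.0])
    roll = float(db_params_roll[i] @ basis)
    pitch = float(db_params_pitch[i] @ basis)
    return roll, pitch
```

Together with the deflection angle predicted directly by the network, this yields the complete three-dimensional finger pose.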
In summary, the method provided by the embodiment of the present application collects a planar fingerprint image of an object to be measured; determines the planar fingerprint pose corresponding to the planar fingerprint image using a pre-trained planar pose estimation model; and determines, from the planar fingerprint pose, the complete three-dimensional finger pose matched with the object to be measured by parameter learning or statistical modeling. Because an accurate planar pose estimation model is trained in advance, and the complete three-dimensional finger pose matching the planar fingerprint pose of the object to be measured is determined by mapping or fitting on top of that model, the method solves the technical problem that existing fingerprint-based three-dimensional finger pose estimation falls short in functionality and convenience.
In order to implement the above embodiments, the present application further provides a system for estimating a three-dimensional gesture of a finger from a planar fingerprint.
Fig. 7 is a schematic structural diagram of a system for estimating a three-dimensional gesture of a finger from a planar fingerprint according to an embodiment of the present application.
As shown in fig. 7, a system for estimating a three-dimensional pose of a finger from a planar fingerprint comprises:
the acquisition module 710 is used for acquiring a planar fingerprint image of an object to be detected;
a plane pose estimation module 720, configured to determine a plane fingerprint pose corresponding to the plane fingerprint image by using a pre-trained plane pose estimation model;
the determining module 740 is configured to determine, according to the planar fingerprint gesture, a complete three-dimensional gesture of the finger matched with the object to be detected through parameter learning or statistical modeling.
In summary, in the system provided by the embodiment of the present application, the acquisition module acquires a planar fingerprint image of an object to be detected; the planar pose estimation module determines the planar fingerprint pose corresponding to the planar fingerprint image using a pre-trained planar pose estimation model; and the determining module determines, from the planar fingerprint pose, the complete three-dimensional finger pose matched with the object to be detected by parameter learning or statistical modeling. Because an accurate planar pose estimation model is trained in advance, and the complete three-dimensional finger pose matching the planar fingerprint pose of the object to be detected is determined by mapping or fitting on top of that model, the system solves the technical problem that existing fingerprint-based three-dimensional finger pose estimation falls short in functionality and convenience.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A method of estimating a three-dimensional pose of a finger from a planar fingerprint, the method comprising:
collecting a planar fingerprint image of an object to be detected;
determining a plane fingerprint attitude corresponding to the plane fingerprint image by using a pre-trained plane attitude estimation model;
and determining the complete three-dimensional gesture of the finger matched with the object to be detected in a parameter learning or statistical modeling mode according to the planar fingerprint gesture.
2. The method of claim 1, wherein the planar fingerprint pose comprises a position and a deflection angle of a planar fingerprint image;
the determining of the complete three-dimensional gesture of the finger matched with the object to be detected in a parameter learning or statistical modeling mode comprises the following steps:
determining a three-dimensional attitude mapping function corresponding to the plane fingerprint image according to the parameter learning or statistical modeling mode;
determining a rolling angle and a pitching angle of the plane fingerprint image according to the three-dimensional attitude mapping function and the position of the plane fingerprint image;
and determining a complete three-dimensional posture matched with the object to be detected according to the deflection angle, the rolling angle and the pitch angle of the plane fingerprint image.
3. The method of claim 2, wherein determining the three-dimensional pose mapping function corresponding to the planar fingerprint image based on the parameter learning or statistical modeling comprises:
constructing a first plane fingerprint image database in a machine learning mode, wherein the first plane fingerprint image database comprises feature descriptors;
determining a plurality of planar fingerprint images matched with the feature descriptors from a first planar fingerprint image database, and determining a first mapping parameter corresponding to each of the plurality of planar fingerprint images;
fusing the plurality of first mapping parameters to obtain a second mapping parameter, and determining a three-dimensional attitude mapping function corresponding to the plane fingerprint image according to the second mapping parameter, wherein the second mapping parameter is obtained by calculating an average value or a weighted average value of the plurality of first mapping parameters, or taking an average value of a preset plurality of most matched first mapping parameters;
or determining at least one probability model for describing the finger through a simulation system and an actually acquired data set, and determining a three-dimensional posture mapping function corresponding to the plane fingerprint image according to the at least one probability model for describing the finger.
4. The method of claim 1, prior to determining a planar fingerprint pose corresponding to the planar fingerprint image using a pre-trained planar pose estimation model, further comprising:
acquiring a training plane fingerprint image and training three-dimensional angle data corresponding to the training plane fingerprint image;
and training a plane attitude estimation model according to the training plane fingerprint image and the training three-dimensional angle data to obtain the pre-trained plane attitude estimation model.
5. The method of claim 4, wherein the acquiring of the training flat fingerprint image and the training three-dimensional angle data corresponding to the training flat fingerprint image comprises:
acquiring the plane fingerprint image for training through a plane fingerprint acquisition instrument;
acquiring training three-dimensional angle data corresponding to the training plane fingerprint image by using two three-axis gyroscopes, or acquiring training three-dimensional angle data corresponding to the training plane fingerprint image by using an optical tracking and measuring system, or acquiring training three-dimensional angle data corresponding to the training plane fingerprint image by using the simulation system and a data generator; wherein
the three-dimensional angle data for training comprises a pitch angle true value, a deflection angle true value and a roll angle true value, wherein the range of the pitch angle true value is not more than 20 degrees and not less than-80 degrees, the range of the deflection angle true value is not more than 90 degrees and not less than-90 degrees, and the range of the roll angle true value is not more than 75 degrees and not less than-75 degrees.
6. The method according to claim 4, further comprising, after the acquiring of the training flat fingerprint image and the training three-dimensional angle data corresponding to the training flat fingerprint image:
storing the training planar fingerprint image into a second planar fingerprint image database;
acquiring a plurality of training plane fingerprint images, a feature descriptor and a first mapping parameter of each training plane fingerprint image from the second plane fingerprint image database according to preset conditions;
and respectively storing the plurality of training plane fingerprint images, the first mapping parameter and the feature descriptors of the plurality of training plane fingerprint images into the first plane fingerprint image database.
7. The method of claim 6, wherein the training a planar pose estimation model from the training planar fingerprint image and the training three-dimensional angular data comprises:
and acquiring a first mapping parameter of each training plane fingerprint image in the first plane fingerprint image database by using a function fitting mode, wherein the first mapping parameter comprises a pitch angle mapping parameter and a roll angle mapping parameter.
8. The method of claim 7,
determining a pitch angle mapping parameter for each training planar fingerprint image in the first database of planar fingerprint images by:
f_pitch(x, y) = b_1 x^2 + b_2 y^2 + b_3 x + b_4 y + b_5 ln(x) + b_6 ln(y) + b_7

wherein (x, y) is the position of the fingerprint, f_pitch(x, y) is the true value of the pitch angle of the fingerprint, and b = (b_1, b_2, …, b_7) are the pitch angle mapping parameters corresponding to the planar fingerprint image;
determining roll angle mapping parameters for each training flat fingerprint image in the first database of flat fingerprint images by:
f_roll(x, y) = a_1 x^2 + a_2 y^2 + a_3 x + a_4 y + a_5 ln(x) + a_6 ln(y) + a_7

wherein (x, y) is the position of the fingerprint, f_roll(x, y) is the true value of the roll angle of the fingerprint, and a = (a_1, a_2, …, a_7) are the roll angle mapping parameters corresponding to the planar fingerprint image.
9. The method of claim 4, wherein the planar pose estimation model is trained by:
L_all = L_pos + λ L_yaw
L_pos = ‖p − p′‖²
L_yaw = ‖θ − θ′‖²

wherein L_all represents the total loss function value, L_pos represents the loss function value of the position prediction, L_yaw represents the loss function value of the yaw (deflection) angle, and λ is a weighting coefficient; p represents the true value of the position of the fingerprint, p′ represents the predicted value of the planar pose estimation model for the position of the fingerprint, θ represents the true value of the deflection angle of the fingerprint, and θ′ represents the predicted value of the planar pose estimation model for the deflection angle of the fingerprint.
10. A system for estimating the three-dimensional pose of a finger from a planar fingerprint, the system comprising:
the acquisition module is used for acquiring a planar fingerprint image of an object to be detected;
the plane attitude estimation module is used for determining a plane fingerprint attitude corresponding to the plane fingerprint image by utilizing a pre-trained plane attitude estimation model;
and the determining module is used for determining the complete three-dimensional gesture of the finger matched with the object to be detected in a parameter learning or statistical modeling mode.
CN202111301866.2A 2021-11-04 2021-11-04 Method and system for estimating three-dimensional gesture of finger by planar fingerprint Pending CN114202778A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111301866.2A CN114202778A (en) 2021-11-04 2021-11-04 Method and system for estimating three-dimensional gesture of finger by planar fingerprint


Publications (1)

Publication Number Publication Date
CN114202778A true CN114202778A (en) 2022-03-18

Family

ID=80646831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111301866.2A Pending CN114202778A (en) 2021-11-04 2021-11-04 Method and system for estimating three-dimensional gesture of finger by planar fingerprint



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination