CN110807423B - Method and device for processing fingerprint image under screen and electronic equipment - Google Patents


Info

Publication number
CN110807423B
Authority
CN
China
Prior art keywords
screen
geometric mapping
mapping relation
fingerprint
fingerprint image
Prior art date
Legal status
Active
Application number
CN201911058968.9A
Other languages
Chinese (zh)
Other versions
CN110807423A (en)
Inventor
谢锋明
王家伟
邢源
高炼
刘宇轩
Current Assignee
TIANJIN JIHAO TECHNOLOGY CO LTD
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201911058968.9A
Publication of CN110807423A
Application granted
Publication of CN110807423B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/13 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1347 Preprocessing; Feature extraction

Abstract

The invention provides a method and a device for processing an under-screen fingerprint image, and an electronic device, wherein the method comprises the following steps: acquiring an under-screen fingerprint image and an on-screen fingerprint image corresponding to the under-screen fingerprint image; determining a first geometric mapping relation between any two on-screen fingerprint images; determining, by a hand-eye calibration method, a second geometric mapping relation between the coordinate system of the on-screen fingerprint image and the coordinate system of the under-screen fingerprint image; and determining a target geometric mapping relation between any two under-screen fingerprint images based on the first geometric mapping relation and the second geometric mapping relation. The method can determine the target geometric mapping relation between any two under-screen fingerprint images and can therefore determine whether the two under-screen fingerprint images are images of the same fingerprint area.

Description

Method and device for processing fingerprint image under screen and electronic equipment
Technical Field
The invention relates to the technical field of fingerprint recognition, and in particular to a method and a device for processing an under-screen fingerprint image, and an electronic device.
Background
The core problem in the field of fingerprint recognition is determining whether two images originate from the same fingerprint. In the field of under-screen fingerprints, determining whether two under-screen fingerprint images cover the same fingerprint area requires first determining the geometric mapping relationship between the two under-screen fingerprint images, and then deciding, based on that geometric mapping relationship, whether the two fingerprint images cover the same fingerprint area. However, because the imaging focal length of the under-screen fingerprint acquisition module is very short and its field of view is small, the captured fingerprint area is very small, and the image area of the resulting under-screen fingerprint image is very limited.
The prior art cannot determine the geometric mapping relationship between two under-screen fingerprint images of such limited area, and therefore cannot determine whether the two under-screen fingerprint images cover the same fingerprint area.
In conclusion, the prior art suffers from the technical problem that the geometric mapping relation between under-screen fingerprint images cannot be determined.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a method and an apparatus for processing an under-screen fingerprint image, and an electronic device, so as to alleviate the technical problem that the geometric mapping relationship between under-screen fingerprint images cannot be determined in the prior art.
In a first aspect, an embodiment of the present invention provides a method for processing an under-screen fingerprint image, including: acquiring an under-screen fingerprint image and an on-screen fingerprint image corresponding to the under-screen fingerprint image; determining a first geometric mapping relation between any two on-screen fingerprint images; determining, by a hand-eye calibration method, a second geometric mapping relation between the coordinate system of the on-screen fingerprint image and the coordinate system of the under-screen fingerprint image, the second geometric mapping relation being unique; and determining a target geometric mapping relation between any two under-screen fingerprint images based on the first geometric mapping relation and the second geometric mapping relation.
Further, the step of determining the first geometric mapping relation between any two on-screen fingerprint images comprises: determining the first geometric mapping relation between any two on-screen fingerprint images by a fingerprint identification method.
Further, the step of determining, by a hand-eye calibration method, a second geometric mapping relation between the coordinate system of the on-screen fingerprint image and the coordinate system of the under-screen fingerprint image comprises: acquiring a reference under-screen fingerprint image and a reference on-screen fingerprint image corresponding to the reference under-screen fingerprint image, the reference under-screen fingerprint image being an under-screen fingerprint image whose fingerprint ridges can be accurately identified by the human eye; determining a first reference geometric mapping relation between any two reference on-screen fingerprint images by the fingerprint identification method; determining a second reference geometric mapping relation between any two reference under-screen fingerprint images; and obtaining the second geometric mapping relation by the hand-eye calibration method according to the first reference geometric mapping relation and the second reference geometric mapping relation.
Further, the step of determining a geometric mapping relation between any two fingerprint images by the fingerprint identification method comprises the following steps: extracting features of key points of the any two fingerprint images by using a Minutia Cylinder Code; inputting the features of the key points into an extended clique model to obtain matched key point pairs; and fitting the key point pairs by a least squares method to obtain the geometric mapping relation; wherein the any two fingerprint images are any two on-screen fingerprint images or any two reference on-screen fingerprint images; when the any two fingerprint images are any two on-screen fingerprint images, the geometric mapping relation is the first geometric mapping relation, and when the any two fingerprint images are any two reference on-screen fingerprint images, the geometric mapping relation is the first reference geometric mapping relation.
Further, the step of determining the target geometric mapping relation between any two under-screen fingerprint images based on the first geometric mapping relation and the second geometric mapping relation includes: calculating the target geometric mapping relation according to the mapping-relation conversion formula Mb = X·Ma·X⁻¹, where Mb represents the target geometric mapping relation, X represents the second geometric mapping relation, and Ma represents the first geometric mapping relation.
Further, after the target geometric mapping relation is obtained, the method further includes: taking multiple pairs of under-screen fingerprint images and the target geometric mapping relation corresponding to each pair of under-screen fingerprint images as training samples; and training an original geometric mapping relation detection model with the training samples to obtain a geometric mapping relation detection model, the geometric mapping relation detection model being used for detecting the geometric mapping relation between under-screen fingerprint images.
Further, after the target geometric mapping relation is obtained, the method further includes: performing, according to the target geometric mapping relation, coordinate transformation on the pixel points in either one of two target under-screen fingerprint images to obtain a transformed target under-screen fingerprint image, the two target under-screen fingerprint images being the two under-screen fingerprint images corresponding to the target geometric mapping relation; and comparing the transformed target under-screen fingerprint image with the target under-screen fingerprint image that has not been coordinate-transformed, and determining whether the two target under-screen fingerprint images are images of the same fingerprint area.
Further, the method further comprises: acquiring a plurality of under-screen fingerprint images to be identified; detecting any two of the under-screen fingerprint images to be identified with the geometric mapping relation detection model to obtain the geometric mapping relation between the two under-screen fingerprint images to be identified; performing, according to the geometric mapping relation, coordinate transformation on the pixel points in either one of the two under-screen fingerprint images to be identified to obtain a transformed under-screen fingerprint image to be identified; and comparing the transformed under-screen fingerprint image to be identified with the under-screen fingerprint image to be identified that has not been coordinate-transformed, and determining whether the two under-screen fingerprint images to be identified are images of the same fingerprint area.
Further, the under-screen fingerprint image is obtained by an under-screen fingerprint acquisition module shooting the fingerprint of the user when the user presses the screen, and the on-screen fingerprint image is obtained by shooting the fingerprint residue on the screen after the user presses the screen.
In a second aspect, an embodiment of the present invention further provides a device for processing an under-screen fingerprint image, including: an obtaining unit, used for acquiring an under-screen fingerprint image and an on-screen fingerprint image corresponding to the under-screen fingerprint image; a first determining unit, used for determining a first geometric mapping relation between any two on-screen fingerprint images; a second determining unit, used for determining, by a hand-eye calibration method, a second geometric mapping relation between the coordinate system of the on-screen fingerprint image and the coordinate system of the under-screen fingerprint image, the second geometric mapping relation being unique; and a third determining unit, used for determining a target geometric mapping relation between any two under-screen fingerprint images based on the first geometric mapping relation and the second geometric mapping relation.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to any one of the above first aspects when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer storage medium on which a computer program is stored; when the computer program runs on a computer, the computer executes the steps of the method of any one of the implementations of the first aspect.
In the embodiment of the invention, an under-screen fingerprint image and an on-screen fingerprint image corresponding to the under-screen fingerprint image are obtained first; a first geometric mapping relation between any two on-screen fingerprint images is then determined; a second geometric mapping relation between the coordinate system of the on-screen fingerprint image and the coordinate system of the under-screen fingerprint image is determined by a hand-eye calibration method; and finally, a target geometric mapping relation between any two under-screen fingerprint images is determined based on the first geometric mapping relation and the second geometric mapping relation. As described above, the method can determine the target geometric mapping relation between any two under-screen fingerprint images and can therefore determine whether the two under-screen fingerprint images are images of the same fingerprint area, which alleviates the technical problem in the prior art that the geometric mapping relation between under-screen fingerprint images cannot be determined.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic diagram of an electronic device according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for processing an under-screen fingerprint image according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for determining a first geometric mapping relationship between any two on-screen fingerprint images according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for determining a second geometric mapping relationship between the coordinate system of an on-screen fingerprint image and the coordinate system of an under-screen fingerprint image by a hand-eye calibration method according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method for training a geometric mapping relationship detection model according to an embodiment of the present invention;
FIG. 6 is a flowchart of a method for identifying an under-screen fingerprint image according to an embodiment of the present invention;
FIG. 7 is a flowchart of another method for identifying an under-screen fingerprint image according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an apparatus for processing an under-screen fingerprint image according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1:
First, an electronic device 100 for implementing an embodiment of the present invention, which can be used to execute the method for processing an under-screen fingerprint image according to embodiments of the present invention, is described with reference to FIG. 1.
As shown in FIG. 1, electronic device 100 includes one or more processors 102, one or more memories 104, an input device 106, an output device 108, and a camera 110, which are interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may be implemented in at least one hardware form of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), and an Application-Specific Integrated Circuit (ASIC). The processor 102 may be a Central Processing Unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 100 to perform desired functions.
The memory 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and may be executed by the processor 102 to implement the client-side functionality (implemented by the processor) and/or other desired functionality in the embodiments of the invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The camera 110 is used to capture an under-screen fingerprint image and an on-screen fingerprint image corresponding to the under-screen fingerprint image. The captured under-screen fingerprint image and its corresponding on-screen fingerprint image are processed by the method for processing an under-screen fingerprint image to obtain the target geometric mapping relation between any two under-screen fingerprint images. For example, the camera may capture a desired under-screen fingerprint image of a user and the on-screen fingerprint image corresponding to that under-screen fingerprint image, which are then processed by the method to obtain the target geometric mapping relation between any two under-screen fingerprint images. The camera can also store the captured images in the memory 104 for use by other components.
Exemplarily, the electronic device for implementing the method for processing an under-screen fingerprint image according to the embodiment of the present invention may be implemented as a smart mobile terminal such as a smartphone or a tablet computer, or as any other device with computing capability.
Example 2:
According to an embodiment of the present invention, an embodiment of a method for processing an under-screen fingerprint image is provided. It should be noted that the steps shown in the flowchart of the drawings may be executed in a computer system, such as one executing a set of computer-executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from that shown.
FIG. 2 is a flowchart of a method for processing an under-screen fingerprint image according to an embodiment of the present invention. As shown in FIG. 2, the method includes the following steps:
step S202, acquiring a fingerprint image under the screen and an on-screen fingerprint image corresponding to the fingerprint image under the screen.
In the embodiment of the invention, the under-screen fingerprint image can be obtained by an under-screen fingerprint acquisition module shooting the user's fingerprint when the user presses the screen, and the on-screen fingerprint image is obtained by a controllable camera shooting the fingerprint residue left on the screen after the user presses the screen. Each time the user presses the screen, one under-screen fingerprint image and one on-screen fingerprint image corresponding to the under-screen fingerprint image are acquired.
In addition, the screen may be a mobile phone screen; of course, it may also be another type of screen.
Before the under-screen fingerprint image and its corresponding on-screen fingerprint image are collected, the screen is first cleaned. The user then presses the screen, and the under-screen fingerprint acquisition module shoots the user's fingerprint to obtain the under-screen fingerprint image; after the press is completed, the controllable camera shoots the fingerprint residue on the screen to obtain the on-screen fingerprint image.
Step S204, determining a first geometric mapping relation between any two on-screen fingerprint images.
After the on-screen fingerprint images are obtained, a first geometric mapping relation between any two on-screen fingerprint images can be determined through a traditional fingerprint algorithm, and the first geometric mapping relation can be in the form of a mapping matrix.
Step S206, determining, by a hand-eye calibration method, a second geometric mapping relation between the coordinate system of the on-screen fingerprint image and the coordinate system of the under-screen fingerprint image.
The second geometric mapping relation is unique and is associated with the device to which the screen belongs.
Like the first geometric mapping relation, the second geometric mapping relation determined by the hand-eye calibration method can also take the form of a mapping matrix.
The hand-eye calibration method is widely used in the field of robotics. In robot kinematics, the robot here refers to a multi-joint, multi-degree-of-freedom mechanical arm driven by several rotating motors to achieve controllable positioning of the end of the arm. The robot itself has no visual sensor; a camera is mounted on or near the robot, the coordinates of a target are obtained with the camera, and operating on the target according to the image obtained by the camera is called robot vision. To relate the coordinate system of the camera (the "eye" of the robot) to the coordinate system of the robot (the "hand" of the robot), the two coordinate systems must be calibrated against each other, and this calibration process is called hand-eye calibration. The inventors apply the hand-eye calibration method to the present invention to determine the second geometric mapping relation between the coordinate system of the on-screen fingerprint image and the coordinate system of the under-screen fingerprint image.
Step S208, determining a target geometric mapping relation between any two under-screen fingerprint images based on the first geometric mapping relation and the second geometric mapping relation.
After the first geometric mapping relation and the second geometric mapping relation are obtained, they are further converted to obtain the target geometric mapping relation between any two under-screen fingerprint images.
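As an illustrative aside (not part of the original disclosure), the conjugation formula Mb = X·Ma·X⁻¹ applied in step S208 can be motivated as follows, under the assumption that X is read as the mapping from on-screen coordinates to under-screen coordinates:

```latex
% Assumption (not stated in the text): X maps on-screen coordinates to
% under-screen coordinates, in homogeneous coordinates. For two presses of the
% same finger region, with M_a mapping on-screen image 1 to on-screen image 2:
p^{\mathrm{under}}_2 = X\,p^{\mathrm{on}}_2 = X\,M_a\,p^{\mathrm{on}}_1
                     = X\,M_a\,X^{-1}\,p^{\mathrm{under}}_1
\quad\Longrightarrow\quad
M_b = X\,M_a\,X^{-1}
```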
In the embodiment of the invention, an under-screen fingerprint image and an on-screen fingerprint image corresponding to the under-screen fingerprint image are obtained first; a first geometric mapping relation between any two on-screen fingerprint images is then determined; a second geometric mapping relation between the coordinate system of the on-screen fingerprint image and the coordinate system of the under-screen fingerprint image is determined by a hand-eye calibration method; and finally, a target geometric mapping relation between any two under-screen fingerprint images is determined based on the first geometric mapping relation and the second geometric mapping relation. As described above, the method can determine the target geometric mapping relation between any two under-screen fingerprint images and can therefore determine whether the two under-screen fingerprint images are images of the same fingerprint area, which alleviates the technical problem in the prior art that the geometric mapping relation between under-screen fingerprint images cannot be determined.
The foregoing briefly introduces the method for processing an under-screen fingerprint image of the present invention; its details are described below.
In this embodiment, an implementation manner of determining the first geometric mapping relationship between any two on-screen fingerprint images in step S204 is provided, and includes: and determining a first geometric mapping relation between any two on-screen fingerprint images by a fingerprint identification method.
Referring to fig. 3, the step of determining the first geometric mapping relationship between any two on-screen fingerprint images by the fingerprint identification method includes:
and S301, extracting the characteristics of key points of any two on-screen fingerprint images by adopting the minutiae columnar codes.
The traditional fingerprint algorithm mentioned in step S204 may specifically be a fingerprint identification method, and in particular a fingerprint identification method for latent (on-site) fingerprints. In an optional implementation, the fingerprint identification method for latent fingerprints uses the Minutia Cylinder Code (MCC) and the Extended Clique Model (ECM).
In the implementation process, the features of the key points of any two on-screen fingerprint images are extracted with the Minutia Cylinder Code. The feature of a key point may be used to describe a local feature of a fingerprint minutia (crossing point) in an on-screen fingerprint image.
Step S302, inputting the features of the key points into the extended clique model to obtain matched key point pairs.
After the features of the key points are obtained, they are input into the extended clique model, which determines the matched key point pairs by finding a maximal compatible group (clique) under the compatibility constraints. There are multiple key point pairs, and each key point pair can correspond to one matrix.
Step S303, fitting the key point pairs by a least squares method to obtain the first geometric mapping relation.
Finally, the key point pairs are fitted by a least squares method to obtain the first geometric mapping relation, which corresponds to all the key point pairs. In implementation, the first geometric mapping relation is a mapping matrix.
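For illustration only, the sketch below shows one way such a least-squares fit could be carried out. The patent does not state the parametric form of the mapping matrix, so the affine model, the function name fit_affine, and the use of NumPy are assumptions rather than part of the disclosed method:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares fit of a 3x3 affine mapping matrix M such that
    [x', y', 1]^T ≈ M @ [x, y, 1]^T for every matched key point pair."""
    src = np.asarray(src_pts, dtype=float)   # (N, 2) key points in image A
    dst = np.asarray(dst_pts, dtype=float)   # (N, 2) matched key points in image B
    ones = np.ones((len(src), 1))
    A = np.hstack([src, ones])               # (N, 3) homogeneous source coordinates
    # Solve A @ P ≈ dst for the 3x2 parameter block P, then assemble the 3x3 matrix.
    P, *_ = np.linalg.lstsq(A, dst, rcond=None)
    M = np.eye(3)
    M[:2, :] = P.T
    return M
```

A projective (homography) model could be fitted in the same spirit with a direct linear transform if perspective distortion needs to be modelled.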
The above description details the process of determining a first geometric mapping between any two on-screen fingerprint images, and the following description details the process of determining a second geometric mapping.
In this embodiment, an implementation of determining, by the hand-eye calibration method, the second geometric mapping relation between the coordinate system of the on-screen fingerprint image and the coordinate system of the under-screen fingerprint image in step S206 is provided. With reference to FIG. 4, it includes the following steps:
step S401, a reference screen lower fingerprint image and a reference screen upper fingerprint image corresponding to the reference screen lower fingerprint image are obtained.
The sharpness of the reference under-screen fingerprint image meets a preset condition; for example, the fingerprint ridges in the reference under-screen fingerprint image can be accurately identified by the human eye, that is, the reference under-screen fingerprint image has clear fingerprint ridges.
Step S402, determining a first reference geometric mapping relation between any two reference on-screen fingerprint images by the fingerprint identification method.
This process is similar to steps S301 to S303: features of key points of any two reference on-screen fingerprint images are extracted with the Minutia Cylinder Code; the extracted features of the key points are then input into the extended clique model to obtain matched key point pairs; and finally the key point pairs are fitted by a least squares method to obtain the first reference geometric mapping relation.
Step S403, determining a second reference geometric mapping relation between any two reference under-screen fingerprint images.
Because the fingerprint ridges in the reference under-screen fingerprint images are clear, the second reference geometric mapping relation between any two reference under-screen fingerprint images can be determined.
In implementation, any two reference under-screen fingerprint images can be detected by an original geometric mapping relation detection model that has a certain processing capability to obtain the second reference geometric mapping relation, and the obtained second reference geometric mapping relation is then verified by the human eye to ensure its accuracy.
Step S404, obtaining the second geometric mapping relation by the hand-eye calibration method according to the first reference geometric mapping relation and the second reference geometric mapping relation.
After the first reference geometric mapping relation and the second reference geometric mapping relation are obtained, the second geometric mapping relation can be derived in reverse by the hand-eye calibration method from the obtained first reference geometric mapping relation and second reference geometric mapping relation.
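The text only states that X is obtained "in reverse" by the hand-eye calibration method. Purely as a hedged sketch of what such a solver could look like, the conjugation relation Mb_ref = X·Ma_ref·X⁻¹ can be rewritten as Mb_ref·X = X·Ma_ref and solved linearly over all reference pairs; the function below, its name, and the plain SVD solution are assumptions, not the patented procedure:

```python
import numpy as np

def solve_hand_eye(ref_pairs):
    """Estimate the 3x3 calibration matrix X from reference pairs (Ma_ref, Mb_ref),
    assuming Mb_ref = X @ Ma_ref @ inv(X), i.e. Mb_ref @ X = X @ Ma_ref."""
    identity = np.eye(3)
    rows = []
    for Ma, Mb in ref_pairs:
        # vec(Mb X) = (I ⊗ Mb) vec(X) and vec(X Ma) = (Ma^T ⊗ I) vec(X)
        # with column-major vec, so each pair contributes 9 homogeneous equations.
        rows.append(np.kron(identity, Mb) - np.kron(Ma.T, identity))
    A = np.vstack(rows)                      # (9k, 9) coefficient matrix
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1].reshape(3, 3, order="F")      # null-space vector, column-major layout
    return X / X[2, 2]                       # fix the arbitrary scale (assumes X[2, 2] != 0)
```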
The above description describes the process of determining the second geometric mapping relationship in detail, and the following description describes the process of determining the target geometric mapping relationship between any two underscreen fingerprint images.
In this embodiment, an implementation of step S208, determining a target geometric mapping relation between any two under-screen fingerprint images based on the first geometric mapping relation and the second geometric mapping relation, includes the following step:
calculating the target geometric mapping relation according to the mapping-relation conversion formula Mb = X·Ma·X⁻¹, where Mb represents the target geometric mapping relation, X represents the second geometric mapping relation, and Ma represents the first geometric mapping relation.
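Applying this formula is a single matrix conjugation; the minimal NumPy illustration below assumes 3x3 mapping matrices, and the function name is made up for this example:

```python
import numpy as np

def to_under_screen_mapping(Ma, X):
    # Mb = X · Ma · X⁻¹: conjugate the on-screen mapping Ma by the calibration
    # matrix X to obtain the mapping Mb between the two under-screen images.
    return X @ Ma @ np.linalg.inv(X)

# Usage: Mb = to_under_screen_mapping(Ma, X)
```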
In an optional embodiment of the present invention, after obtaining the target geometric mapping relationship, referring to fig. 5, the method further comprises:
step S501, a plurality of pairs of under-screen fingerprint images and a target geometric mapping relation corresponding to each pair of under-screen fingerprint images are used as training samples.
Step S502, training an original geometric mapping relation detection model with the training samples to obtain a geometric mapping relation detection model; the geometric mapping relation detection model is used for detecting the geometric mapping relation between under-screen fingerprint images.
It should be noted that the original geometric mapping relation detection model already has a certain detection capability and can detect the geometric mapping relation between under-screen fingerprint images with clear fingerprint ridges.
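The patent does not disclose the architecture of the geometric mapping relation detection model or its training procedure. Purely as an assumed illustration, the sketch below trains a small PyTorch CNN that takes a pair of under-screen fingerprint images stacked as two channels and regresses the eight free entries of the 3x3 target mapping matrix; every name and hyperparameter here is hypothetical:

```python
import torch
import torch.nn as nn

class MappingNet(nn.Module):
    """Toy model: regress the first 8 entries of the 3x3 mapping (M[2, 2] fixed to 1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 8)

    def forward(self, pair):                  # pair: (B, 2, H, W) stacked image pair
        return self.head(self.features(pair).flatten(1))

def train(model, loader, epochs=10, lr=1e-3):
    # loader is assumed to yield (pair, target), target being the first 8 matrix entries.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for pair, target in loader:
            opt.zero_grad()
            loss = loss_fn(model(pair), target)
            loss.backward()
            opt.step()
```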
The above process only introduces one application scenario of the target geometric mapping relationship, and another application scenario of the target geometric mapping relationship is described below.
In an optional embodiment of the present invention, after obtaining the target geometric mapping relationship, referring to fig. 6, the method further comprises:
step S601, according to the target geometric mapping relation, carrying out coordinate transformation on pixel points in any one of the two target underscreen fingerprint images to obtain a transformed target underscreen fingerprint image.
And the two target under-screen fingerprint images are two under-screen fingerprint images corresponding to the target geometric mapping relation.
Step S602, comparing the transformed target underscreen fingerprint image with a target underscreen fingerprint image that is not subjected to coordinate transformation in the two target underscreen fingerprint images, and determining whether the two target underscreen fingerprint images are images of the same fingerprint area.
Specifically, if the two target under-screen fingerprint images cover the same fingerprint area, the transformed target under-screen fingerprint image and the target under-screen fingerprint image that has not been coordinate-transformed can be aligned at the pixel level. During the comparison, the pixel value of each pixel point of the transformed target under-screen fingerprint image is subtracted from the pixel value of the corresponding pixel point of the target under-screen fingerprint image that has not been coordinate-transformed. If the resulting difference is smaller than a preset pixel threshold, the two target under-screen fingerprint images are determined to be images of the same fingerprint area; otherwise, the two target under-screen fingerprint images are determined not to be images of the same fingerprint area.
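One possible realization of this comparison is sketched below. It assumes 8-bit grayscale images, uses OpenCV only for the warp, and aggregates the per-pixel differences into a mean before thresholding; the aggregation, the masking of unwarped pixels, and the function name are illustrative choices, since the text only specifies a per-pixel subtraction against a preset threshold:

```python
import cv2
import numpy as np

def same_fingerprint_area(img_a, img_b, Mb, pixel_threshold=20.0):
    """Warp img_a into img_b's coordinates with the target mapping Mb (3x3)
    and compare the two grayscale images pixel by pixel."""
    h, w = img_b.shape[:2]
    M = Mb.astype(np.float64)
    warped = cv2.warpPerspective(img_a, M, (w, h))
    # Only compare pixels that actually received data from img_a after warping.
    mask = cv2.warpPerspective(np.ones_like(img_a), M, (w, h)) > 0
    if not mask.any():
        return False
    diff = np.abs(warped.astype(np.float32) - img_b.astype(np.float32))
    return float(diff[mask].mean()) < pixel_threshold
```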
The foregoing provides a specific implementation of image recognition, and another implementation of image recognition is described below:
in an alternative embodiment of the present invention, referring to fig. 7, the method further comprises:
step S701, acquiring a plurality of under-screen fingerprint images to be identified;
step S702, detecting any two to-be-identified under-screen fingerprint images in the to-be-identified under-screen fingerprint images through a geometric mapping relation detection model to obtain a geometric mapping relation between the any two to-be-identified under-screen fingerprint images;
step S703, according to the geometric mapping relation, carrying out coordinate transformation on pixel points in any one of any two to-be-identified underscreen fingerprint images to obtain a transformed to-be-identified underscreen fingerprint image;
step S704, comparing the transformed to-be-identified underscreen fingerprint image with the to-be-identified underscreen fingerprint image that is not subjected to coordinate transformation in any two to-be-identified underscreen fingerprint images, and determining whether any two to-be-identified underscreen fingerprint images are images of the same fingerprint area.
The specific process in step S704 is similar to the specific process in step S602, and is not described herein again.
The method can determine the target geometric mapping relation between any two under-screen fingerprint images based on the on-screen fingerprint images, and can therefore determine whether the two under-screen fingerprint images are images of the same fingerprint area. Meanwhile, the original geometric mapping relation detection model can be trained with the determined target geometric mapping relations and the corresponding under-screen fingerprint images, so the method has high accuracy and good practicability.
Example 3:
An embodiment of the present invention further provides a device for processing an under-screen fingerprint image. The device is mainly used for executing the method for processing an under-screen fingerprint image provided in the foregoing content of the embodiments of the present invention; the device provided by the embodiment of the present invention is described in detail below.
Fig. 8 is a schematic diagram of an apparatus for processing an underscreen fingerprint image according to an embodiment of the present invention, and as shown in fig. 8, the apparatus for processing an underscreen fingerprint image mainly includes: an obtaining unit 10, a first determining unit 20, a second determining unit 30 and a third determining unit 40, wherein:
the obtaining unit is used for acquiring an under-screen fingerprint image and an on-screen fingerprint image corresponding to the under-screen fingerprint image;
the first determining unit is used for determining a first geometric mapping relation between any two on-screen fingerprint images;
the second determining unit is used for determining, by a hand-eye calibration method, a second geometric mapping relation between the coordinate system of the on-screen fingerprint image and the coordinate system of the under-screen fingerprint image; the second geometric mapping relation is unique;
and the third determining unit is used for determining a target geometric mapping relation between any two underscreen fingerprint images based on the first geometric mapping relation and the second geometric mapping relation.
In the embodiment of the invention, an under-screen fingerprint image and an on-screen fingerprint image corresponding to the under-screen fingerprint image are obtained first; a first geometric mapping relation between any two on-screen fingerprint images is then determined; a second geometric mapping relation between the coordinate system of the on-screen fingerprint image and the coordinate system of the under-screen fingerprint image is determined by a hand-eye calibration method; and finally, a target geometric mapping relation between any two under-screen fingerprint images is determined based on the first geometric mapping relation and the second geometric mapping relation. As described above, the device can determine the target geometric mapping relation between any two under-screen fingerprint images and can therefore determine whether the two under-screen fingerprint images are images of the same fingerprint area, which alleviates the technical problem in the prior art that the geometric mapping relation between under-screen fingerprint images cannot be determined.
Optionally, the first determining unit is further configured to: and determining a first geometric mapping relation between any two on-screen fingerprint images by a fingerprint identification method.
Optionally, the second determining unit is further configured to: acquire a reference under-screen fingerprint image and a reference on-screen fingerprint image corresponding to the reference under-screen fingerprint image, the reference under-screen fingerprint image being an under-screen fingerprint image whose fingerprint ridges can be accurately identified by the human eye; determine a first reference geometric mapping relation between any two reference on-screen fingerprint images by the fingerprint identification method; determine a second reference geometric mapping relation between any two reference under-screen fingerprint images; and obtain the second geometric mapping relation by the hand-eye calibration method according to the first reference geometric mapping relation and the second reference geometric mapping relation.
Optionally, the first determining unit or the second determining unit is further configured to: extract features of key points of any two fingerprint images by using a Minutia Cylinder Code; input the features of the key points into the extended clique model to obtain matched key point pairs; and fit the key point pairs by a least squares method to obtain a geometric mapping relation; when the any two fingerprint images are any two on-screen fingerprint images, the geometric mapping relation is the first geometric mapping relation; and when the any two fingerprint images are any two reference on-screen fingerprint images, the geometric mapping relation is the first reference geometric mapping relation.
Optionally, the third determining unit is further configured to: calculate the target geometric mapping relation according to the mapping-relation conversion formula Mb = X·Ma·X⁻¹, where Mb represents the target geometric mapping relation, X represents the second geometric mapping relation, and Ma represents the first geometric mapping relation.
Optionally, the apparatus is further configured to: taking the multiple pairs of the under-screen fingerprint images and the corresponding target geometric mapping relation of each pair of the under-screen fingerprint images as training samples; training the original geometric mapping relation detection model through a training sample to obtain a geometric mapping relation detection model; the geometric mapping relation detection model is used for detecting the geometric mapping relation between the fingerprint images under the screen.
Optionally, the apparatus is further configured to: according to the target geometric mapping relation, carrying out coordinate transformation on pixel points in any one of the two target underscreen fingerprint images to obtain a transformed target underscreen fingerprint image; the two target under-screen fingerprint images are two under-screen fingerprint images corresponding to the target geometric mapping relation; and comparing the transformed target underscreen fingerprint image with a target underscreen fingerprint image which is not subjected to coordinate transformation in the two target underscreen fingerprint images, and determining whether the two target underscreen fingerprint images are images of the same fingerprint area.
Optionally, the apparatus is further configured to: acquiring a plurality of under-screen fingerprint images to be identified; detecting any two to-be-identified under-screen fingerprint images in the multiple to-be-identified under-screen fingerprint images through a geometric mapping relation detection model to obtain a geometric mapping relation between the any two to-be-identified under-screen fingerprint images; according to the geometric mapping relation, carrying out coordinate transformation on pixel points in any one of any two to-be-identified underscreen fingerprint images to obtain a transformed to-be-identified underscreen fingerprint image; and comparing the transformed to-be-identified underscreen fingerprint image with the to-be-identified underscreen fingerprint image which is not subjected to coordinate transformation in any two to-be-identified underscreen fingerprint images, and determining whether the any two to-be-identified underscreen fingerprint images are images of the same fingerprint area.
Optionally, the under-screen fingerprint image is obtained by shooting a fingerprint of the user by the under-screen fingerprint acquisition module when the user presses the screen; the fingerprint image on the screen is obtained by shooting the fingerprint residue on the screen after the user presses the screen.
The device provided by the embodiment of the present invention has the same implementation principle and technical effects as the foregoing method embodiments. For brevity, where this device embodiment does not mention a detail, reference may be made to the corresponding content in the foregoing method embodiments.
In another embodiment of the present invention, a computer storage medium is further provided, on which a computer program is stored; when executed by a computer, the computer program performs the steps of the method of Embodiment 2 above.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected to" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection; or as a direct connection, an indirect connection through an intermediate medium, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A method for processing an underscreen fingerprint image is characterized by comprising the following steps:
acquiring an under-screen fingerprint image and an on-screen fingerprint image corresponding to the under-screen fingerprint image, wherein the on-screen fingerprint image is obtained by shooting fingerprint residue on a screen after a user presses the screen;
determining a first geometric mapping relation between any two on-screen fingerprint images;
determining a second geometric mapping relation between the coordinate system of the on-screen fingerprint image and the coordinate system of the under-screen fingerprint image by a hand-eye calibration method; the second geometric mapping relation is unique;
and determining a target geometric mapping relation between any two under-screen fingerprint images based on the first geometric mapping relation and the second geometric mapping relation.
2. The method of claim 1, wherein the step of determining a first geometric mapping between any two on-screen fingerprint images comprises:
and determining a first geometric mapping relation between any two on-screen fingerprint images by a fingerprint identification method.
3. The method of claim 1, wherein the step of determining a second geometric mapping relation between the coordinate system of the on-screen fingerprint image and the coordinate system of the under-screen fingerprint image by a hand-eye calibration method comprises:
acquiring a reference under-screen fingerprint image and a reference on-screen fingerprint image corresponding to the reference under-screen fingerprint image; wherein the reference under-screen fingerprint image is an under-screen fingerprint image in which the fingerprint ridges can be accurately identified by the human eye;
determining a first reference geometric mapping relation between any two reference on-screen fingerprint images by a fingerprint identification method;
determining a second reference geometric mapping relation between any two reference under-screen fingerprint images;
and obtaining the second geometric mapping relation by the hand-eye calibration method according to the first reference geometric mapping relation and the second reference geometric mapping relation.
4. A method according to claim 2 or 3, characterized in that the step of determining the geometrical mapping between any two fingerprint images by means of a fingerprint identification method comprises:
extracting features of key points of any two fingerprint images by using a Minutia Cylinder Code;
inputting the features of the key points into an extended clique model to obtain matched key point pairs;
fitting the key point pairs by a least square method to obtain the geometric mapping relation;
the two arbitrary fingerprint images are two arbitrary on-screen fingerprint images or two arbitrary reference on-screen fingerprint images; when the two arbitrary fingerprint images are two arbitrary on-screen fingerprint images, the geometric mapping relation is the first geometric mapping relation; and when the two arbitrary fingerprint images are two arbitrary reference on-screen fingerprint images, the geometric mapping relation is the first reference geometric mapping relation.
5. The method of claim 1, wherein the step of determining a target geometric mapping relation between any two under-screen fingerprint images based on the first geometric mapping relation and the second geometric mapping relation comprises:
calculating the target geometric mapping relation according to the mapping-relation conversion formula Mb = X·Ma·X⁻¹;
wherein Mb represents the target geometric mapping relation, X represents the second geometric mapping relation, and Ma represents the first geometric mapping relation.
6. The method according to any one of claims 1-3, 5, wherein after obtaining the target geometric mapping relationship, the method further comprises:
taking the multiple pairs of the under-screen fingerprint images and the corresponding target geometric mapping relation of each pair of the under-screen fingerprint images as training samples;
training an original geometric mapping relation detection model through the training sample to obtain a geometric mapping relation detection model; the geometric mapping relation detection model is used for detecting geometric mapping relations among the fingerprint images under the screen.
7. The method according to any one of claims 1-3, 5, wherein after obtaining the target geometric mapping relationship, the method further comprises:
according to the target geometric mapping relation, carrying out coordinate transformation on pixel points in any one of the two target underscreen fingerprint images to obtain a transformed target underscreen fingerprint image; the two target under-screen fingerprint images are two under-screen fingerprint images corresponding to the target geometric mapping relation;
and comparing the transformed target under-screen fingerprint image with a target under-screen fingerprint image which is not subjected to coordinate transformation in the two target under-screen fingerprint images, and determining whether the two target under-screen fingerprint images are images of the same fingerprint area.
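Claim 7 does not fix the comparison criterion. As a hedged example, the sketch below warps one grayscale under-screen image with the target geometric mapping (assumed to be a 3×3 homogeneous matrix) and scores the overlapping region with normalized cross-correlation; the function name same_region and the threshold are arbitrary choices for illustration.

```python
import cv2
import numpy as np

def same_region(img_a, img_b, T, ncc_threshold=0.6):
    """Warp grayscale img_a into img_b's coordinates with the target mapping T (3x3),
    then compare the overlapping pixels by normalized cross-correlation."""
    h, w = img_b.shape[:2]
    warped = cv2.warpPerspective(img_a, T.astype(np.float64), (w, h))
    mask = cv2.warpPerspective(np.ones_like(img_a), T.astype(np.float64), (w, h)) > 0
    if mask.sum() < 100:                      # too little overlap to decide
        return False
    a = warped[mask].astype(np.float32)
    b = img_b[mask].astype(np.float32)
    a -= a.mean()
    b -= b.mean()
    ncc = float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-6))
    return ncc > ncc_threshold
```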
8. The method of claim 6, further comprising:
acquiring a plurality of under-screen fingerprint images to be identified;
detecting any two to-be-identified under-screen fingerprint images in the multiple to-be-identified under-screen fingerprint images through the geometric mapping relation detection model to obtain a geometric mapping relation between the any two to-be-identified under-screen fingerprint images;
according to the geometric mapping relation, carrying out coordinate transformation on pixel points in any one of the any two to-be-identified underscreen fingerprint images to obtain a transformed to-be-identified underscreen fingerprint image;
and comparing the transformed to-be-identified underscreen fingerprint image with the to-be-identified underscreen fingerprint image which is not subjected to coordinate transformation in the any two to-be-identified underscreen fingerprint images, and determining whether the any two to-be-identified underscreen fingerprint images are images of the same fingerprint area.
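Claim 8 chains the trained detection model with the same warp-and-compare check as claim 7. A short sketch, reusing the hypothetical MappingRegressor and same_region helpers from the earlier examples and assuming grayscale images normalized to [0, 1]:

```python
import numpy as np
import torch

def identify_pair(model, img_a, img_b):
    """Predict the geometric mapping between two under-screen fingerprint images to be
    identified, then decide whether they show the same fingerprint area."""
    pair = torch.from_numpy(np.stack([img_a, img_b]).astype(np.float32)).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        params = model(pair).squeeze(0).numpy()   # 6 predicted affine parameters
    T = np.eye(3)
    T[:2, :] = params.reshape(2, 3)
    return same_region((img_a * 255).astype(np.uint8),
                       (img_b * 255).astype(np.uint8), T)
```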
9. The method of claim 1, wherein the under-screen fingerprint image is obtained by an under-screen fingerprint acquisition module photographing the fingerprint of the user when the user presses the screen.
10. An apparatus for processing an underscreen fingerprint image, comprising:
an acquisition unit, configured to acquire an under-screen fingerprint image and an on-screen fingerprint image corresponding to the under-screen fingerprint image, wherein the on-screen fingerprint image is obtained by photographing the fingerprint residue left on the screen after a user presses the screen;
a first determining unit, configured to determine a first geometric mapping relation between any two on-screen fingerprint images;
a second determining unit, configured to determine a second geometric mapping relation between the coordinate system of the on-screen fingerprint image and the coordinate system of the off-screen fingerprint image by a hand-eye calibration method, wherein the second geometric mapping relation is unique;
and a third determining unit, configured to determine a target geometric mapping relation between any two under-screen fingerprint images based on the first geometric mapping relation and the second geometric mapping relation.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of the preceding claims 1 to 9 when executing the computer program.
12. A computer storage medium, having a computer program stored thereon, which, when executed by a computer, performs the steps of the method of any of claims 1 to 9.
CN201911058968.9A 2019-10-31 2019-10-31 Method and device for processing fingerprint image under screen and electronic equipment Active CN110807423B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911058968.9A CN110807423B (en) 2019-10-31 2019-10-31 Method and device for processing fingerprint image under screen and electronic equipment

Publications (2)

Publication Number Publication Date
CN110807423A CN110807423A (en) 2020-02-18
CN110807423B true CN110807423B (en) 2022-04-22

Family

ID=69500966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911058968.9A Active CN110807423B (en) 2019-10-31 2019-10-31 Method and device for processing fingerprint image under screen and electronic equipment

Country Status (1)

Country Link
CN (1) CN110807423B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663685A (en) * 2012-03-19 2012-09-12 宁波大学 Geometric correction method based on nonlinearity
CN103793696A (en) * 2014-02-12 2014-05-14 北京海鑫科金高科技股份有限公司 Method and system for identifying fingerprints
CN110036393A (en) * 2018-04-13 2019-07-19 华为技术有限公司 Fingerprint recognition terminal under a kind of screen

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
On-screen fingerprint design scheme; Zhu Zhengyi; Optoelectronic Technology; 2020-06-30; Vol. 40, No. 2; full text *

Also Published As

Publication number Publication date
CN110807423A (en) 2020-02-18

Similar Documents

Publication Publication Date Title
WO2021057848A1 (en) Network training method, image processing method, network, terminal device and medium
CN111340864A (en) Monocular estimation-based three-dimensional scene fusion method and device
CN109151442B (en) Image shooting method and terminal
CN111507200A (en) Body temperature detection method, body temperature detection device and dual-optical camera
US11501431B2 (en) Image processing method and apparatus and neural network model training method
CN111597884A (en) Facial action unit identification method and device, electronic equipment and storage medium
CN105005972A (en) Shooting distance based distortion correction method and mobile terminal
EP4030749B1 (en) Image photographing method and apparatus
CN107564020B (en) Image area determination method and device
CN107272899B (en) VR (virtual reality) interaction method and device based on dynamic gestures and electronic equipment
CN111274999A (en) Data processing method, image processing method, device and electronic equipment
CN109711287B (en) Face acquisition method and related product
CN112560791B (en) Recognition model training method, recognition method and device and electronic equipment
CN111382791B (en) Deep learning task processing method, image recognition task processing method and device
CN110956131B (en) Single-target tracking method, device and system
CN113454684A (en) Key point calibration method and device
CN110807423B (en) Method and device for processing fingerprint image under screen and electronic equipment
CN110717441B (en) Video target detection method, device, equipment and medium
CN109547678B (en) Processing method, device, equipment and readable storage medium
CN109871814B (en) Age estimation method and device, electronic equipment and computer storage medium
CN111401285B (en) Target tracking method and device and electronic equipment
CN115439875A (en) Posture evaluation device, method and system
CN113298122A (en) Target detection method and device and electronic equipment
CN114170652A (en) Face image detection method and device, terminal equipment and storage medium
CN112613383A (en) Joint point detection method, posture recognition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20230323
Address after: No. S, 17/F, No. 1, Zhongguancun Street, Haidian District, Beijing 100082
Patentee after: Beijing Jigan Technology Co.,Ltd.
Address before: 316-318, block a, Rongke Information Center, No.2, south academy of Sciences Road, Haidian District, Beijing
Patentee before: MEGVII (BEIJING) TECHNOLOGY Co.,Ltd.
TR01 Transfer of patent right
Effective date of registration: 20230712
Address after: 300462 201-1, Floor 2, Building 4, No. 188, Rixin Road, Binhai Science Park, Binhai, Tianjin
Patentee after: Tianjin Jihao Technology Co.,Ltd.
Address before: No. S, 17/F, No. 1, Zhongguancun Street, Haidian District, Beijing 100082
Patentee before: Beijing Jigan Technology Co.,Ltd.