CN116978067A - Fingerprint identification method and device


Info

Publication number: CN116978067A
Authority: CN (China)
Prior art keywords: fingerprint, loss function, image, model, representing
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202210412163.5A
Other languages: Chinese (zh)
Inventors: 邸皓轩, 谢字希, 李丹洪
Assignee (current and original): Honor Device Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Events: application filed by Honor Device Co Ltd; priority to CN202210412163.5A; publication of CN116978067A

Classifications

    • G06N 3/08: Learning methods (under G06N 3/00 Computing arrangements based on biological models; G06N 3/02 Neural networks)
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting (under G06T 3/00 Geometric image transformations in the plane of the image)
    • G06T 2207/10004: Still image; photographic image (under G06T 2207/10 Image acquisition modality)
    • G06T 2207/20081: Training; learning (under G06T 2207/20 Special algorithmic details)
    • G06T 2207/20084: Artificial neural networks [ANN] (under G06T 2207/20 Special algorithmic details)
    • G06T 2207/30168: Image quality inspection (under G06T 2207/30 Subject of image; context of image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A fingerprint identification method and device, applied in the technical field of fingerprint identification. The method comprises the following steps: receiving a first operation of a user, where the first operation is used to trigger fingerprint identification; in response to the first operation, acquiring a first fingerprint and performing fingerprint identification based on the first fingerprint; when fingerprint identification fails, repairing the first fingerprint by using a first repair model to obtain a second fingerprint, where the first repair model is obtained by model training based on fingerprint image data pairs, a plurality of loss functions are determined during model training, the plurality of loss functions include loss functions generated based on global features, a fingerprint image data pair comprises data of a third fingerprint and a fourth fingerprint having a corresponding relationship, and the quality of the fourth fingerprint is better than that of the third fingerprint; and performing fingerprint identification based on the second fingerprint. The method can effectively repair a fingerprint image, improve the success rate of fingerprint identification, and greatly improve the user's fingerprint identification experience.

Description

Fingerprint identification method and device
Technical Field
The application relates to the field of biometric recognition, and in particular to a fingerprint recognition method and device.
Background
With the popularization of intelligent terminals, fingerprint identification technology has also developed rapidly in the terminal field. In fingerprint application scenarios, the quality of the acquired fingerprint is often poor, which reduces the success rate of fingerprint identification. The prior art restores acquired fingerprint images to obtain high-quality fingerprints. However, existing fingerprint repair methods have limitations: they cannot effectively improve the fingerprint identification rate, which affects the user experience. Therefore, how to repair an acquired fingerprint image is a problem to be solved.
Disclosure of Invention
In view of the above, the present application provides a fingerprint identification method, apparatus, computer readable storage medium and computer program product, which can effectively repair fingerprint images, improve the success rate of fingerprint identification, and greatly improve the fingerprint identification experience of users.
In a first aspect, a method for fingerprint identification is provided, including:
receiving a first operation of a user, wherein the first operation is used for triggering fingerprint identification;
responding to the first operation, acquiring a first fingerprint, and carrying out fingerprint identification based on the first fingerprint;
when fingerprint identification fails, repairing the first fingerprint by using a first repairing model to obtain a second fingerprint;
The first repair model is obtained by model training based on fingerprint image data pairs, a plurality of loss functions are determined in the model training process, the loss functions comprise loss functions generated based on global features, the fingerprint image data pairs comprise data of a third fingerprint and a fourth fingerprint with a corresponding relation, and the quality of the fourth fingerprint is better than that of the third fingerprint;
and fingerprint identification is carried out based on the second fingerprint.
The above method may be performed by a terminal device or a chip in the terminal device. Based on this scheme, when fingerprint identification fails, the first fingerprint is repaired by the first repair model to obtain a second fingerprint; that is, the repaired fingerprint has higher image quality. Performing identification with the repaired fingerprint can improve the success rate of fingerprint identification and the user's fingerprint identification experience. The first repair model is trained with a plurality of loss functions taken into account; considering multiple loss functions allows the repaired fingerprint image to be evaluated better, so that the fingerprint image repaired by the first repair model is smoother, greatly improving the success rate of fingerprint identification.
In one possible implementation, the training network of the first repair model includes a global average pooling layer for fusing feature information of the fingerprint image and a scaling layer for scaling a feature map of the fingerprint image.
Adding a global average pooling layer to the training network helps reduce the number of parameters, resulting in a faster run time.
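As an illustration of why global average pooling reduces the parameter count, the following is a minimal PyTorch sketch (not taken from the patent; tensor shapes are illustrative):

```python
import torch
import torch.nn as nn

# Global average pooling collapses each C x H x W feature map to C x 1 x 1
# with zero learnable parameters; a fully connected layer over the flattened
# map would instead need on the order of C*H*W weights per output unit.
gap = nn.AdaptiveAvgPool2d(1)
feat = torch.randn(8, 64, 32, 32)   # batch of 64-channel feature maps
fused = gap(feat)                   # shape (8, 64, 1, 1); per-channel means
```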
In one possible implementation, the plurality of loss functions are determined based on a first output result and the fourth fingerprint, where the first output result is the result obtained by inputting the third fingerprint into the training network of the first repair model during model training.
In one possible implementation, the plurality of loss functions includes one or more of the following: a total variation loss function, an image structure multiscale loss function, a gradient loss function, and a generation loss function. Taking these loss functions of global features into account helps improve the accuracy of the repaired fingerprint image.
In one possible implementation, the total variation loss function satisfies the following equation:

$$L_{TV} = \frac{1}{CHW}\left(\left\|\nabla_x F_w(I_s)\right\|_2^2 + \left\|\nabla_y F_w(I_s)\right\|_2^2\right)$$

where $L_{TV}$ is the total variation (TV) loss function, $F_w(I_s)$ represents the first output result, $C$ represents the length (channel dimension) of the image, $H$ represents the height of the image, $W$ represents the width of the image, $\|\cdot\|_2^2$ denotes the squared 2-norm, $\nabla_x$ represents the gradient vector in the x-direction, and $\nabla_y$ represents the gradient vector in the y-direction.
In one possible implementation, the image structure multiscale loss function satisfies the following equation:

$$SSIM(x,y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$

where $SSIM(x,y)$ represents the image structure multiscale loss function, $\mu_x$ is the mean of $x$, $\mu_y$ is the mean of $y$, $\sigma_x^2$ is the variance of $x$, $\sigma_y^2$ is the variance of $y$, $\sigma_{xy}$ is the covariance of $x$ and $y$, and $c_1$ and $c_2$ are two constants that keep the division stable.
In one possible implementation, the gradient loss function satisfies the following equation:

$$L_{gradient} = \Delta x + \Delta y$$

where $L_{gradient}$ represents the gradient loss function, $\Delta x$ represents the gradient in the x-direction, and $\Delta y$ represents the gradient in the y-direction.
In one possible implementation, the generation loss function satisfies the following equation:

$$L_{GAN} = -\sum \log D\left(I_t, F_w(I_s)\right)$$

where $L_{GAN}$ represents the GAN loss function, $D$ represents the discriminator in the generative model, $F_w(I_s)$ represents the first output result, and $I_t$ represents the image of the fourth fingerprint.
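For illustration, the four loss terms above can be written down directly. The following is a minimal PyTorch sketch, not the patent's implementation: the N x C x H x W tensor layout, the literal reading of the gradient loss, and the discriminator interface are assumptions.

```python
import torch

def tv_loss(out):
    # Total variation loss: squared 2-norm of the x/y gradient maps of the
    # network output F_w(I_s), normalized by C*H*W.
    c, h, w = out.shape[-3:]
    dx = out[..., :, 1:] - out[..., :, :-1]   # x-direction gradient
    dy = out[..., 1:, :] - out[..., :-1, :]   # y-direction gradient
    return (dx.pow(2).sum() + dy.pow(2).sum()) / (c * h * w)

def ssim_loss(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Single-window SSIM per the formula above; 1 - SSIM is returned so that
    # maximizing structural similarity minimizes the loss.
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1 - ssim

def gradient_loss(out):
    # Literal reading of L_gradient = Δx + Δy: sum of gradient magnitudes.
    dx = (out[..., :, 1:] - out[..., :, :-1]).abs().sum()
    dy = (out[..., 1:, :] - out[..., :-1, :]).abs().sum()
    return dx + dy

def gan_loss(discriminator, target, out):
    # L_GAN = -Σ log D(I_t, F_w(I_s)); `discriminator` is assumed to return
    # a probability in (0, 1) for the (target, output) pair.
    return -torch.log(discriminator(target, out) + 1e-8).sum()
```

A single-window SSIM is used here for brevity; the "multiscale" variant named by the patent would evaluate this term over several image scales.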
In one possible implementation, the plurality of loss functions further includes one or more of the following: a norm loss function and a feature loss function.
Optionally, the feature loss function may include a style loss and a content loss, which may be implemented by a VGG loss function.
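The patent does not give a formula for the VGG loss. The sketch below is a common construction under stated assumptions (the VGG-16 backbone, the layer cut-off, and the torchvision API usage are not from the patent): content loss compares intermediate activations, style loss compares their Gram matrices.

```python
import torch
import torch.nn.functional as F
from torchvision import models

class VGGLoss(torch.nn.Module):
    # Hypothetical sketch of a VGG-based feature loss (content + style).
    # Note: VGG-16 expects 3-channel input; a grayscale fingerprint would
    # need to be repeated to 3 channels first.
    def __init__(self, cut=16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        self.features = vgg.features[:cut].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    @staticmethod
    def gram(f):
        n, c, h, w = f.shape
        f = f.flatten(2)                                  # (n, c, h*w)
        return f @ f.transpose(1, 2) / (c * h * w)        # Gram matrix

    def forward(self, out, target):
        fo, ft = self.features(out), self.features(target)
        content = F.l1_loss(fo, ft)                       # content loss
        style = F.l1_loss(self.gram(fo), self.gram(ft))   # style loss
        return content + style
```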
In one possible implementation, the fingerprint image data pair is obtained through a two-way generative adversarial network degradation model, where the two-way generative adversarial network degradation model is trained based on training data.
Compared with manually selecting fingerprint image data pairs for model training, generating the fingerprint image data pairs with a two-way generative adversarial network model reduces the acquisition cost of training data.
In a second aspect, there is provided a method of model training, the method comprising:
acquiring a fingerprint image data pair, wherein the fingerprint image data pair comprises a third fingerprint and a fourth fingerprint, and the quality of the fourth fingerprint is better than that of the third fingerprint;
inputting a third fingerprint into a training network of the first repair model in the model training process to obtain a first output result;
determining a plurality of loss functions based on the first output result and a fourth fingerprint, wherein the loss functions at least comprise loss functions generated based on global features;
training through the multiple loss functions to obtain the first repair model.
The above method may be performed by a model training apparatus. Based on this scheme, model training is performed with fingerprint image data pairs, a plurality of loss functions are determined based on the first output result and the fourth fingerprint, and a first repair model with higher precision is finally obtained. Considering multiple loss functions allows the repaired fingerprint image to be evaluated better, so that the fingerprint image repaired by the first repair model is smoother, greatly improving the success rate of fingerprint identification.
The relevant descriptions regarding the first repair model have been mentioned in the first aspect, and these descriptions are equally applicable to the second aspect, and are not repeated here for brevity.
It will be appreciated that the first repair model described above may be applied to the fingerprint identification method of the first aspect.
In a third aspect, there is provided an apparatus for fingerprint identification, comprising means for performing any of the methods of the first aspect. The device may be a terminal (or a terminal device) or may be a chip in the terminal (or the terminal device). The device comprises an input unit, a display unit and a processing unit.
When the apparatus is a terminal, the processing unit may be a processor, the input unit may be a communication interface, and the display unit may be a graphic processing module and a screen; the terminal may further comprise a memory for storing computer program code which, when executed by the processor, causes the terminal to perform any of the methods of the first aspect.
When the device is a chip in the terminal, the processing unit may be a logic processing unit in the chip, the input unit may be an output interface, a pin, a circuit, or the like, and the display unit may be a graphics processing unit in the chip; the chip may also include memory, which may be memory within the chip (e.g., registers, caches, etc.), or memory external to the chip (e.g., read-only memory, random access memory, etc.); the memory is for storing computer program code which, when executed by the processor, causes the chip to perform any of the methods of the first aspect.
In a fourth aspect, there is provided an apparatus for model training for performing the method of the second aspect or any possible implementation of the second aspect. In particular, the apparatus comprises means for performing the method of the second aspect described above or any possible implementation of the second aspect.
In a fifth aspect, an apparatus for model training is provided. The apparatus includes a processor, a memory, and a communication interface. The processor is connected with the memory and the communication interface. The memory is used for storing instructions, the processor is used for executing the instructions, and the communication interface is used for communicating with other network elements under the control of the processor. The processor, when executing the memory-stored instructions, causes the processor to perform the method of the second aspect or any possible implementation of the second aspect.
In a sixth aspect, there is provided a computer-readable storage medium storing computer program code which, when run by a fingerprint identification apparatus, causes the apparatus to perform any one of the methods of the first aspect.
In a seventh aspect, there is provided a computer-readable storage medium storing computer program code which, when run by a model training apparatus, causes the apparatus to perform any one of the methods of the second aspect.
In an eighth aspect, there is provided a computer program product, the computer program product comprising: computer program code which, when run by a fingerprint identification apparatus, causes the apparatus to perform any one of the methods of the first aspect.
In a ninth aspect, there is provided a computer program product, the computer program product comprising: computer program code which, when run by a model training apparatus, causes the apparatus to perform any one of the methods of the second aspect.
Drawings
FIG. 1 is an exemplary diagram of an application scenario of an embodiment of the present application;
FIG. 2 is a schematic diagram of a hardware system suitable for use with the electronic device of the present application;
FIG. 3 is a schematic diagram of a software system suitable for use with the electronic device of the present application;
FIG. 4 is a schematic flow chart of a method of fingerprint identification according to an embodiment of the application;
FIG. 5 is a schematic flow chart of a training process of a first repair model of an embodiment of the application;
FIG. 6 is a schematic diagram of a network design of a neural network according to an embodiment of the present application;
FIG. 7 is an exemplary graph of the effect contrast of a fingerprint repair image;
FIG. 8 is a schematic diagram of an application scenario in which embodiments of the present application are applied;
fig. 9 is a schematic block diagram of a fingerprint recognition device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
The fingerprint identification method provided by the embodiment of the application can be applied to the electronic equipment with the fingerprint identification function. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a wearable device, a multimedia playing device, an electronic book reader, a personal computer, a Personal Digital Assistant (PDA), a netbook, an Augmented Reality (AR) device, a Virtual Reality (VR) device, or the like. The present application is not limited to the specific form of the electronic device.
By way of example and not limitation, when the electronic device is a wearable device, "wearable device" may be a general term for everyday wearables developed by applying wearable technology to the intelligent design of daily wear, such as eyeglasses, gloves, watches, apparel, and shoes. A wearable device is a portable device that is worn directly on the human body or integrated into the user's clothing or accessories and can collect the user's biometric data. A wearable device is not only a hardware device; it can also realize powerful functions through software support, data interaction, and cloud interaction. In one implementation, wearable smart devices include devices that are fully functional, large in size, and able to perform complete or partial functions independently of a smartphone, such as smart watches or smart glasses. In another implementation, a wearable smart device may be a device that focuses only on a certain type of application function and needs to be used together with another device (e.g., a smartphone), such as a smart band or smart jewelry.
The embodiments of the present application do not specifically limit the application scenario of fingerprint identification; any scenario in which identification is performed using a fingerprint is applicable. For example, the user uses the fingerprint for unlocking, payment, or authentication.
It will be appreciated that the embodiments of the present application are applicable to scenarios involving fingerprint image processing, for example, ultrasonic fingerprint recognition, optical fingerprint recognition, under-screen fingerprint recognition, and rear-mounted fingerprint recognition (i.e., the fingerprint device is disposed on the back of the electronic device).
In general, limited by one or more factors such as finger dryness, humidity, pressure, skin damage, sweat, and the cleanliness of the fingerprint sensor, the quality of the fingerprint collected during fingerprint identification may be poor. When the quality of the collected fingerprint is poor, fingerprint identification fails, which affects the user's fingerprint identification experience.
Fig. 1 is a schematic diagram of an application scenario according to an embodiment of the present application. Taking the example that the electronic device is a mobile phone, the mobile phone adopts under-screen fingerprint unlocking, as shown in (1) in fig. 1, a user presses a fingerprint unlocking area 10 of the screen by a finger to attempt fingerprint unlocking. After the user presses the fingerprint unlocking area 10, the mobile phone matches the collected fingerprint with the fingerprint stored in advance by the user. If the matching is successful, the mobile phone screen is successfully unlocked.
It should be understood that the fingerprint unlocking area 10 shown in fig. 1 (1) is only exemplary, and embodiments of the present application are not limited thereto. In fact, the fingerprint unlocking area 10 may be located in other areas of the screen, such as the area of the screen near the power key.
It should also be understood that the fingerprint unlocking shown in (1) in fig. 1 is described by taking the example of the off-screen fingerprint unlocking, and the embodiment of the present application is not limited thereto. For example, the embodiment of the application is also suitable for unlocking the fingerprint on the back of the mobile phone.
If the fingerprints have not been successfully matched after a plurality of attempts, the user is prompted that fingerprint matching has failed. One possible scenario is shown in (2) in fig. 1: the mobile phone displays a prompt box 11 to the user reading "fingerprint matching failed, please keep the finger and sensor clean".
It should be understood that the content in the prompt box 11 is merely illustrative, and the embodiments of the present application are not limited thereto. For example, the content in the prompt box 11 may be "fingerprint verification has failed a maximum number of times, please unlock using a password", or "fingerprint matching has failed".
If the fingerprints have not been successfully matched after a plurality of attempts, the user may be prompted to enter a password to unlock. For example, as shown in (3) in fig. 1, the mobile phone displays an interface for the user to input a password. Of course, the interface shown in (3) of fig. 1 is only one possible scenario, and embodiments of the present application are not limited thereto.
It should be understood that the scenario in fig. 1 is only a schematic illustration of an application scenario of the present application, and the embodiments of the present application are not limited thereto.
In order to improve the user's fingerprint identification experience, an embodiment of the present application provides a fingerprint identification method in which a fingerprint that failed identification is repaired by a first repair model, and the repaired fingerprint is used for fingerprint identification to improve the success rate of fingerprint identification. In the embodiment of the present application, the first repair model is trained with a plurality of loss functions taken into account; considering multiple loss functions allows the repaired fingerprint image to be evaluated better and yields higher precision, so that the fingerprint image repaired by the first repair model is smoother, greatly improving the success rate of fingerprint identification.
The following first describes a hardware system and a software architecture to which the embodiments of the present application are applicable with reference to fig. 2 and 3.
Fig. 2 shows a hardware system of an electronic device suitable for use in the application.
The electronic device 100 may be a mobile phone, a smart screen, a tablet computer, a wearable electronic device, an in-vehicle electronic device, an augmented reality (augmented reality, AR) device, a Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), a projector, etc., and the specific type of the electronic device 100 is not limited in the embodiments of the present application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The configuration shown in fig. 2 does not constitute a specific limitation on the electronic apparatus 100. In other embodiments of the application, electronic device 100 may include more or fewer components than those shown in FIG. 2, or electronic device 100 may include a combination of some of the components shown in FIG. 2, or electronic device 100 may include sub-components of some of the components shown in FIG. 2. For example, the proximity light sensor 180G shown in fig. 2 may be optional. The components shown in fig. 2 may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, the processor 110 may include at least one of the following processing units: application processors (application processor, AP), modem processors, graphics processors (graphics processing unit, GPU), image signal processors (image signal processor, ISP), controllers, video codecs, digital signal processors (digital signal processor, DSP), baseband processors, neural-Network Processors (NPU). The different processing units may be separate devices or integrated devices.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
The connection relationships between the modules shown in fig. 2 are merely illustrative, and do not constitute a limitation on the connection relationships between the modules of the electronic device 100. Alternatively, the modules of the electronic device 100 may also use a combination of the various connection manners in the foregoing embodiments.
The electronic device 100 may implement display functions through a GPU, a display screen 194, and an application processor. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 may be used to display images or video. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini light-emitting diode (Mini LED), a Micro light-emitting diode (Micro LED), a Micro OLED (Micro OLED), or a quantum dot LED (quantum dot light emitting diodes, QLED). In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a photographing function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. The ISP can carry out algorithm optimization on noise, brightness and color of the image, and can optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into a standard Red Green Blue (RGB), YUV, etc. format image signal. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
The electronic device 100 may implement audio functions, such as music playing and recording, through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A may be of various types, such as a resistive pressure sensor, an inductive pressure sensor, or a capacitive pressure sensor. The capacitive pressure sensor may be a device comprising at least two parallel plates with conductive material, and when a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure based on the change in capacitance. When a touch operation acts on the display screen 194, the electronic apparatus 100 detects the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location, but at different touch operation strengths, may correspond to different operation instructions. For example: executing an instruction for checking the short message when the touch operation with the touch operation intensity smaller than the first pressure threshold acts on the short message application icon; and executing the instruction of newly creating the short message when the touch operation with the touch operation intensity being larger than or equal to the first pressure threshold acts on the short message application icon.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, for example, a photodiode. The LED may be an infrared LED. The electronic device 100 emits infrared light outward through the LED. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When the reflected light is detected, the electronic device 100 may determine that an object is present nearby. When no reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect whether the user holds the electronic device 100 close to the ear for talking, so as to automatically extinguish the screen for power saving. The proximity light sensor 180G may also be used for automatic unlocking and automatic screen locking in holster mode or pocket mode. It should be appreciated that the proximity light sensor 180G described in fig. 2 may be an optional component. In some scenarios, an ultrasonic sensor may be utilized in place of proximity sensor 180G to detect proximity light.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint feature to perform functions such as unlocking, accessing an application lock, taking a photograph, and receiving an incoming call.
The touch sensor 180K, also referred to as a touch device. The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a touch screen. The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The touch sensor 180K may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 and at a different location than the display 194.
The keys 190 include a power key and a volume key. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive a key input signal and implement a function related to the key input signal.
The motor 191 may generate vibration. The motor 191 may be used for incoming call alerting as well as for touch feedback. The motor 191 may generate different vibration feedback effects for touch operations acting on different applications. The motor 191 may also produce different vibration feedback effects for touch operations acting on different areas of the display screen 194. Different application scenarios (e.g., time alert, receipt message, alarm clock, and game) may correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The hardware system of the electronic device 100 is described in detail above; the software system of the electronic device 100 is described below. The software system may employ a layered architecture, an event-driven architecture, a microkernel architecture, a micro-service architecture, or a cloud architecture; the embodiment of the present application takes the layered architecture as an example to describe the software system of the electronic device 100.
As shown in fig. 3, the software system using the layered architecture is divided into several layers, each of which has a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the software system may be divided into five layers: from top to bottom, an application layer, an application framework layer, Android runtime and system libraries, a kernel layer, and a trusted execution environment (TEE) layer.
The application layer may include camera, gallery, calendar, conversation, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer may include some predefined functions.
For example, the application framework layer includes a window manager, a content provider, a view system, a telephony manager, a resource manager, and a notification manager.
The window manager is used for managing window programs. The window manager may obtain the display screen size, determine if there are status bars, lock screens, and intercept screens.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, and phonebooks.
The view system includes visual controls, such as controls to display text and controls to display pictures. The view system may be used to build applications. The display interface may be composed of one or more views, for example, a display interface including a text notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide communication functions of the electronic device 100, such as management of call status (on or off).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, and video files.
The notification manager allows an application to display notification information in the status bar; it can be used to convey notification-type messages that automatically disappear after a short stay without user interaction. For example, the notification manager is used for download completion notifications and message alerts. The notification manager may also manage notifications that appear in the system top status bar in the form of charts or scroll-bar text, such as notifications of applications running in the background, as well as notifications that appear on the screen in the form of dialog windows, such as text prompts in the status bar, alert sounds, device vibration, and flashing indicator lights.
Android runtime includes a core library and a virtual machine. Android runtime is responsible for the scheduling and management of the Android system.
The core library consists of two parts: one part contains the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing functions such as management of object life cycle, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules, such as: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., open graphics library (open graphics library for embedded systems, openGL ES) for embedded systems) and 2D graphics engines (e.g., skia graphics library (skia graphics library, SGL)).
The surface manager is used to manage the display subsystem and provides a fusion of the 2D and 3D layers for the plurality of applications.
The media library supports playback and recording of multiple audio formats, playback and recording of multiple video formats, and still image files. The media library may support a variety of audio video coding formats such as MPEG4, h.264, moving picture experts group audio layer 3 (moving picture experts group audio layer III, MP 3), advanced audio coding (advanced audio coding, AAC), adaptive multi-rate (AMR), joint picture experts group (joint photographic experts group, JPG), and portable network graphics (portable network graphics, PNG).
Three-dimensional graphics processing libraries may be used to implement three-dimensional graphics drawing, image rendering, compositing, and layer processing.
The two-dimensional graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer may include a display driver, a camera driver, an audio driver, a sensor driver, and the like.
The TEE layer can provide security services for the Android system. The TEE layer is used to execute various biometric algorithms. The TEE layer is typically used to run critical operations: (1) mobile payment: fingerprint verification, PIN code input, and the like; (2) confidential data: secure storage of private keys, certificates, and the like; (3) content protection: digital rights management (DRM) and the like.
In some possible embodiments, the TEE layer includes a fingerprint recognition algorithm module and a fingerprint repair module. Alternatively, the fingerprint repair module may be disposed separately in the TEE layer (for example, as shown in fig. 3) or may be located inside the fingerprint recognition algorithm module; this is not specifically limited in the embodiments of the present application. In the embodiments of the present application, the fingerprint repair module is used to repair fingerprints that failed identification.
In some possible examples, the fingerprint repair module is configured to perform the fingerprint identification method according to the embodiments of the present application. Optionally, the fingerprint repair module repairs the fingerprint failed in recognition by using the first repair model.
It should be understood that the above illustrates the block diagram of the electronic device based on fig. 2, and the software architecture of the embodiment of the present application is illustrated by fig. 3, but the embodiment of the present application is not limited thereto.
The following describes a fingerprint recognition method according to an embodiment of the present application with reference to fig. 4 to 7. It is to be understood that the fingerprint recognition method shown below may be implemented in an electronic device (e.g., the electronic device shown in fig. 2) having the above-described hardware structure.
Fig. 4 is a schematic flow chart of a method of fingerprint identification according to an embodiment of the application. As shown in fig. 4, the method comprises the steps of:
in step 310, a first operation of a user is received, the first operation being used to trigger fingerprint recognition.
The embodiment of the present application is not limited to the specific form of the first operation.
The first operation may be, for example, an operation in which the user presses the fingerprint unlocking area by a finger in order to initiate fingerprint recognition, in preparation for subsequent fingerprint unlocking.
The first operation may be, for example, an operation in which a user places a finger in the fingerprint unlocking area in order to trigger the fingerprint recognition function.
In step 320, in response to the first operation, a first fingerprint is obtained and fingerprint identification is performed based on the first fingerprint.
Specifically, acquiring the first fingerprint refers to acquiring data of the first fingerprint. The embodiment of the application does not limit the specific form of the fingerprint data; for example, the data of the first fingerprint includes a fingerprint image. As one possible implementation, acquiring the first fingerprint includes collecting the fingerprint through a fingerprint collection module.
Taking a fingerprint unlocking scenario as an example, an under-screen fingerprint identification module is configured under the touch screen of the electronic device. The fingerprint identification area of the under-screen fingerprint identification module may be disposed in a designated area of the touch screen or may cover the whole touch screen; this is not specifically limited here. If the under-screen fingerprint identification module is disposed in a designated area of the touch screen, the fingerprint unlocking procedure is triggered when the user presses that designated area. If the under-screen fingerprint identification module covers the whole touch screen, then in the screen-locked state of the electronic device, the user may press any area of the touch screen to trigger the fingerprint unlocking procedure.
Fingerprint identification based on the first fingerprint may be implemented using existing fingerprint identification techniques. For example, information (e.g., fingerprint feature data, fingerprint images, etc.) of at least one standard fingerprint may be entered in advance in the electronic device. The electronic device matches the first fingerprint against the standard fingerprint; if the fingerprint matching succeeds, the next related processing can be performed, and the specific processing steps depend on the application scenario, such as payment or unlocking. If fingerprint identification fails, the first repair model of the embodiment of the present application is used for fingerprint repair.
Step 330, when fingerprint identification fails, repairing the first fingerprint by using a first repair model to obtain a second fingerprint.
The first repair model is obtained by model training based on fingerprint image data pairs, and a plurality of loss functions are determined in the model training process, wherein the loss functions comprise loss functions generated based on global features; the fingerprint image data pair comprises data of a third fingerprint and a fourth fingerprint with a corresponding relation, and the quality of the fourth fingerprint is superior to that of the third fingerprint.
"repair" may be understood as the process of converting an image of a first fingerprint into an image of a second fingerprint.
The first repair model is used to repair a fingerprint (or low quality fingerprint) that failed to identify in order to obtain a high quality fingerprint (such as a second fingerprint). An image of the first fingerprint is input into the first repair model, and an image of the second fingerprint may be output. The quality of the fingerprint image repaired by the first repair model is better than that of the fingerprint before repair.
It should be understood that the name of the first repair model is not specifically limited in the embodiments of the present application. The first repair model may also be referred to as a fingerprint repair model, a fingerprint training model, a fingerprint feature repair model, and so on.
It should also be appreciated that the type of the first repair model is not particularly limited in this embodiment of the present application. For example, the first repair model may be a machine learning model, a deep learning model, a neural network model, a predictive model, and so on.
A plurality of loss functions are considered in the training process of the first repair model, and the loss functions comprise loss functions generated based on global features. Compared with a scheme of only considering local features (or local differences of pixels), the first restoration model of the embodiment of the application considers global features (or overall differences of images), and is beneficial to improving the image quality after fingerprint restoration.
In general, global features refer to the overall properties of an image, common global features including color features, texture features, and shape features, such as intensity histograms, and the like. Local features are features extracted from local areas of an image, including edges, corner points, lines, curves, areas of special properties, and the like.
In addition, the first repair model is obtained by learning and training on fingerprint image data pairs. A fingerprint image data pair comprises data of a third fingerprint and a fourth fingerprint that have a corresponding relationship (or mapping relationship), and the quality of the fourth fingerprint is better than that of the third fingerprint. In other words, a fingerprint image data pair pairs a high-quality fingerprint image with a low-quality one. The third fingerprint and the fourth fingerprint are one such data pair (or paired fingerprint data): for example, the third fingerprint is a low-quality fingerprint and the fourth fingerprint is the high-quality fingerprint corresponding to it. It will be appreciated that there are a plurality of fingerprint image data pairs; only one pair is described here as an example.
Step 340, fingerprint identification is performed based on the second fingerprint.
The second fingerprint is a restored fingerprint image. One possible implementation way uses the second fingerprint to perform fingerprint identification, and the fingerprint identification is successful. It should be appreciated that the process of fingerprint identification using fingerprints may refer to the prior art and will not be described in detail herein.
In the embodiment of the present application, when fingerprint identification fails, the first fingerprint is repaired by the first repair model to obtain the second fingerprint, i.e., the repaired fingerprint, which has higher image quality. Performing identification with the repaired fingerprint can improve the success rate of fingerprint identification and the user's fingerprint identification experience. The first repair model is trained with a plurality of loss functions taken into account; considering multiple loss functions allows the repaired fingerprint image to be evaluated better, so that the fingerprint image repaired by the first repair model is smoother, greatly improving the success rate of fingerprint identification.
The first repair model is obtained through deep learning training. The training process of the first repair model is described below. Referring to fig. 5, fig. 5 is a schematic flow chart of the training process of the first repair model according to an embodiment of the application. As shown in fig. 5, the training process includes the following steps:
Step 401, a fingerprint image data pair is acquired, where the fingerprint image data pair includes a third fingerprint and a fourth fingerprint, and the quality of the fourth fingerprint is better than that of the third fingerprint.
The fingerprint image data pair may be understood as data for model training. In other words, model training may be performed using the fingerprint image data pairs. For the description of the fingerprint image data pair, reference is made to the foregoing, and no further description is given here.
The fingerprint image data pair may be generated by a data generation model.
As one possible implementation, the fingerprint image data pairs are generated by a generative adversarial network (GAN) model. Compared with manually selecting fingerprint image data pairs for model training, generating the fingerprint image data pairs with a two-way GAN model reduces the acquisition cost of training data.
Alternatively, the two-way generative adversarial network model can be obtained through training. Briefly, a GAN generally comprises a generation network and a discrimination network. The generation network is used to generate (fake) data, and the discrimination network is used to judge the data produced by the generator. Each time the generation network produces a batch of data, the data is sent to the discrimination network for training; the trained discrimination network judges whether the data is real or fake and feeds the result back to the generator. The feedback is used to retrain the generation network so that its output becomes more realistic, after which data is again sent to the discrimination network for training and judgment. This cycle repeats until the discrimination network can no longer distinguish the generated data, i.e., no effective information can be fed back to train the generation network, at which point training stops.
Alternatively, the present application generates the above-described fingerprint image data pairs using a two-way generative adversarial network model (TwoWayGAN). With the TwoWayGAN, a plurality of fingerprint image data pairs can be generated.
For example, the TwoWayGAN model may be trained with manually selected high-quality (good) fingerprint data and low-quality (bad) fingerprint data. The high-quality fingerprint data and the low-quality fingerprint data may be regarded as two sets (denoted set A and set B, respectively). The TwoWayGAN consists of two generators for domain conversion (denoted GA and GB) and two discriminators (denoted DA and DB). For set A, an estimate in set B can first be generated through GB, and that estimate is then input into GA to obtain an estimate in set A; for set B, an estimate in set A can first be generated through GA, and that estimate is then input into GB to obtain an estimate in set B. The two generators GA and GB may be used to generate paired data. In the embodiment of the present application, paired high-quality and low-quality fingerprint data can be obtained based on the TwoWayGAN: after high-quality fingerprint data is input into the TwoWayGAN model, the corresponding low-quality fingerprint data can be generated, thereby obtaining a fingerprint image data pair.
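As a rough sketch of this data-generation step (the function name and the `gb` generator interface are illustrative; the patent provides no code), the trained good-to-bad generator GB can degrade each high-quality fingerprint to synthesize its paired low-quality counterpart:

```python
import torch

def make_fingerprint_pairs(gb, good_fingerprints):
    # gb: trained TwoWayGAN generator mapping set A (good) to set B (bad).
    # Returns (low-quality, high-quality) pairs, i.e. (third, fourth) fingerprints.
    pairs = []
    with torch.no_grad():
        for good in good_fingerprints:              # each: a (C, H, W) tensor
            bad = gb(good.unsqueeze(0)).squeeze(0)  # degraded estimate of `good`
            pairs.append((bad, good))
    return pairs
```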
Step 402, inputting the third fingerprint into the training network of the first repair model during model training to obtain a first output result. In other words, the first output result is the output obtained by inputting the third fingerprint into the training network of the first repair model during model training.
In an embodiment of the present application, the training network of the first repair model may be a neural network. It should be noted that, in the embodiment of the present application, a global average pooling (GAP) layer and a scaling layer are added to the neural network. Fig. 6 is a schematic diagram of the network design of a neural network according to an embodiment of the present application. As shown in fig. 6, the neural network may include: an input layer, a downsampling layer, a global average pooling layer, an upsampling layer, a scaling layer, and an output layer.
The input layer is used for inputting an image and carrying out convolution pretreatment on the input image. The downsampling layer is used for downsampling fingerprint images so as to extract the characteristics of the images. The GAP layer is used for fusing the characteristic information of the fingerprint image. The upsampling layer is used to restore the dimensions of the image (or to restore the resolution of the image). The scaling layer is used for scaling the feature map of the fingerprint image to suppress unimportant features. The output layer is used for performing post-processing (such as convolution processing) and outputting training results (or outputting the repaired fingerprint image).
The process of inputting the third fingerprint into the training network of the first repair model to obtain the first output result is described as an example. First, the input layer takes the image of the third fingerprint and performs convolution preprocessing on it to obtain a preprocessed feature map. Then, the downsampling layer downsamples the preprocessed feature map to obtain a downsampled feature map, and the GAP layer averages the features of each channel of the downsampled feature map to obtain the mean of each channel. Next, the upsampling layer combines the per-channel means with a first feature map and performs a deconvolution operation to obtain an upsampled feature map whose resolution is consistent with that of the preprocessed feature map, where the first feature map is the output of the sub-module of the downsampling layer with the same resolution, connected to the upsampling layer through a skip connection. The scaling layer then scales a weighted feature map to obtain the repaired image, which is the first output result, where the weighted feature map is the feature map obtained by weighting the upsampled feature map and the preprocessed feature map. Finally, the first output result is output through the output layer.
The weighting of the upsampled feature map and the preprocessed feature map comprises the following steps: multiplying the upsampled feature map by a first weight value to obtain a result 1, multiplying the preprocessed feature map by a second weight value to obtain a result 2, and finally summing result 1 and result 2 to obtain the weighted feature map.
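As a minimal sketch of the forward flow just described (the channel counts, kernel sizes, and the fusion weights w1 and w2 below are illustrative assumptions, not parameters disclosed by the embodiment), the training network might be written in PyTorch as:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepairNetSketch(nn.Module):
    """Illustrative encoder-decoder with a GAP step and a scaling layer."""
    def __init__(self, w1: float = 0.6, w2: float = 0.4):
        super().__init__()
        self.pre = nn.Conv2d(1, 16, 3, padding=1)               # input layer: convolution preprocessing
        self.down = nn.Conv2d(16, 32, 3, stride=2, padding=1)   # downsampling layer
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)       # upsampling (deconvolution)
        self.scale = nn.Conv2d(16, 16, 1)                       # scaling layer: 1x1 conv re-weights channels
        self.post = nn.Conv2d(16, 1, 3, padding=1)              # output layer: post-processing
        self.w1, self.w2 = w1, w2                               # assumed fusion weights

    def forward(self, x):
        pre = F.relu(self.pre(x))                 # preprocessed feature map
        down = F.relu(self.down(pre))             # downsampled feature map
        gap = down.mean(dim=(2, 3), keepdim=True) # GAP: per-channel average value
        fused = down + gap                        # inject channel averages (broadcast)
        up = F.relu(self.up(fused))               # deconvolution back to input resolution
        up = up + pre                             # skip connection from same-resolution stage
        weighted = self.w1 * up + self.w2 * pre   # weighted feature map (result 1 + result 2)
        return self.post(self.scale(weighted))    # scale, then output post-processing

y = RepairNetSketch()(torch.rand(1, 1, 160, 160))  # repaired image, same shape as input
```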
It should be understood that the third fingerprint is only described herein as an example, but the embodiment of the present application is not limited thereto. In fact, during model training, other fingerprints in the fingerprint image data pair may be utilized for training.
Based on the above procedure, a person skilled in the art may know the training procedure of the first repair model according to the embodiment of the present application.
As will be appreciated by those skilled in the art, convolution preprocessing refers to a series of operations performed on each pixel in an image using a convolution kernel. In short, downsampling may be understood as the process of shrinking an image, and upsampling may be understood as the process of enlarging an image.
Compared with a conventional neural network design, a GAP layer is added when designing the training network, which effectively reduces the number of parameters. Moreover, the newly added scaling layer and GAP layer help perform deep fusion of the information.
Step 403, determining a plurality of loss functions based on the first output result and the fourth fingerprint. That is, the plurality of loss functions are determined based on the first output result and the fourth fingerprint, and include at least a loss function generated based on global features.
In some possible examples, the loss function generated based on the global features includes, but is not limited to, one or more of the following: the total variation (TV) loss function, the image structure multiscale (SSIM) loss function, the gradient loss function, and the generation loss function.
The plurality of loss functions may include the loss functions generated based on global features described above. Optionally, the plurality of loss functions may further include a feature (VGG) loss function and a norm loss function.
For example, the plurality of loss functions includes: the TV loss function, the SSIM loss function, the gradient loss function, the norm loss function, the VGG style loss, and the VGG content loss.
For another example, the plurality of loss functions includes: the TV loss function, the SSIM loss function, the norm loss function, and so on.
It should be appreciated that the above examples of multiple loss functions are merely exemplary descriptions and embodiments of the present application are not limited in this regard.
The TV loss function can effectively suppress high frequency noise so that the image becomes smoother.
In one possible implementation, the TV loss function satisfies the following equation:

$$L_{TV} = \frac{1}{C\,H\,W}\left(\left\|\nabla_x F_w(I_s)\right\|_2^2 + \left\|\nabla_y F_w(I_s)\right\|_2^2\right)$$

wherein $L_{TV}$ is the TV loss function, $F_w(I_s)$ represents the first output result, $C$ represents the number of channels of the image, $H$ represents the height of the image, $W$ represents the width of the image, $\|\cdot\|_2^2$ represents the squared 2-norm operation, $\nabla_x$ represents the gradient vector in the x direction, $\nabla_y$ represents the gradient vector in the y direction, and x and y represent pixel coordinates of the image.
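Under the reconstructed form of the equation above, the TV loss could be computed as in the following sketch; the 1/(C·H·W) normalization follows the variable definitions given here:

```python
import torch

def tv_loss(out: torch.Tensor) -> torch.Tensor:
    """Total-variation loss over an (N, C, H, W) batch: squared 2-norm of x/y gradients."""
    n, c, h, w = out.shape
    grad_x = out[:, :, :, 1:] - out[:, :, :, :-1]   # finite differences along x (width)
    grad_y = out[:, :, 1:, :] - out[:, :, :-1, :]   # finite differences along y (height)
    return (grad_x.pow(2).sum() + grad_y.pow(2).sum()) / (c * h * w)
```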
In one possible implementation, the $L_1$ loss function satisfies the following equation:

$$L_{\ell 1} = \left\|I_t - F_w(I_s)\right\|_1$$

wherein $L_{\ell 1}$ represents the norm loss function, $I_t$ represents the image of the fourth fingerprint, $F_w(I_s)$ represents the first output result, and $\|\cdot\|_1$ represents the 1-norm operation.
The SSIM loss function is used to measure the similarity of two images.
In one possible implementation, the SSIM loss function satisfies the following equation:

$$SSIM(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$

wherein SSIM(x, y) represents the image structure multiscale loss function, $\mu_x$ is the mean of x, $\mu_y$ is the mean of y, $\sigma_x^2$ is the variance of x, $\sigma_y^2$ is the variance of y, $\sigma_{xy}$ is the covariance of x and y, $c_1$ and $c_2$ are two stabilizing constants (or parameters), and (x, y) represents the pixel coordinates of the image.
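For illustration, a simplified global, single-window SSIM following the formula above might look as below; practical implementations usually evaluate SSIM over local sliding windows and multiple scales. The default values of c1 and c2 are common choices for images scaled to [0, 1], not values specified by the embodiment:

```python
import torch

def ssim_global(x: torch.Tensor, y: torch.Tensor,
                c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """Global single-window SSIM between two images in [0, 1]; c1, c2 stabilize division."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x = ((x - mu_x) ** 2).mean()          # variance of x
    var_y = ((y - mu_y) ** 2).mean()          # variance of y
    cov_xy = ((x - mu_x) * (y - mu_y)).mean() # covariance of x and y
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```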
The VGG loss function essentially measures the feature similarity of images. Optionally, the feature loss may include a content loss and a style loss.
In one possible implementation, the VGG loss function may satisfy the following equation:

$$L_{VGG} = \sum_{j} \frac{1}{C_j H_j W_j}\left\|\phi_j(I_t) - \phi_j\!\left(F_w(I_s)\right)\right\|_2^2$$

wherein $L_{VGG}$ represents the VGG loss function; $j$ indexes the convolutional layer and may take the values 1, 3, 5; $C_j$ represents the number of channels of the j-th layer feature image, $H_j$ represents its height, and $W_j$ represents its width; $\phi_j(\cdot)$ represents the feature map of the j-th convolutional layer; $F_w(I_s)$ represents the first output result; and $\|\cdot\|_2^2$ represents the squared 2-norm operation.
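A sketch of such a feature loss is shown below. The use of torchvision's VGG16 (this assumes torchvision >= 0.13 for the weights API), the tap indices (1, 3, 5) within the feature stack, and the omission of ImageNet normalization are all simplifying assumptions; inputs are assumed to be 3-channel:

```python
import torch
from torchvision import models

# Frozen VGG16 feature extractor used only to compare intermediate activations.
vgg_features = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def vgg_loss(out: torch.Tensor, target: torch.Tensor, layers=(1, 3, 5)) -> torch.Tensor:
    """Sum of size-normalized squared feature differences at the selected layers."""
    loss = torch.tensor(0.0)
    x, y = out, target
    for j, layer in enumerate(vgg_features):
        x, y = layer(x), layer(y)
        if j in layers:
            n, c, h, w = x.shape
            loss = loss + (x - y).pow(2).sum() / (n * c * h * w)  # per the formula, batch-averaged
        if j >= max(layers):
            break
    return loss
```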
The generation loss function is used to measure the loss of fingerprint image data pairs generated based on the data generation model. Illustratively, when the data generation model is a GAN model, the generation loss function refers to the GAN loss function. The GAN loss function is used to measure the discrimination loss of the paired pictures generated by the GAN model. Minimizing the GAN loss function makes the generated fingerprint image more realistic.
In one possible implementation, the GAN loss function satisfies the following equation:

$$L_{GAN} = -\sum \log D\!\left(I_t, F_w(I_s)\right)$$

wherein $L_{GAN}$ represents the GAN loss function, $D$ represents the discriminator in the TwoWayGAN model, $F_w(I_s)$ represents the first output result, and $I_t$ represents the image of the fourth fingerprint.
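Read directly from the formula, a generator-side GAN loss sketch could be as follows; D is a stand-in for the conditional discriminator, whose architecture is not specified here, and the small epsilon guarding log(0) is an added safeguard:

```python
import torch

def gan_loss(D, target: torch.Tensor, output: torch.Tensor) -> torch.Tensor:
    """Generator-side GAN loss: -sum(log D(I_t, F_w(I_s)))."""
    score = D(target, output)            # discriminator confidence in (0, 1)
    return -torch.log(score + 1e-8).sum()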
Gradient loss refers to the sum of gradients of the image in the X-direction and the Y-direction.
In one possible implementation, the gradient loss function satisfies the following equation:

$$L_{gradient} = \Delta x + \Delta y$$

wherein $L_{gradient}$ represents the gradient loss function, $\Delta x$ represents the gradient in the x direction, and $\Delta y$ represents the gradient in the y direction.
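One concrete reading of this definition (an assumption: gradients taken as absolute finite differences of the repaired image) is sketched below:

```python
import torch

def gradient_loss(out: torch.Tensor) -> torch.Tensor:
    """Sum of absolute x- and y-direction finite differences (one reading of Δx + Δy)."""
    dx = (out[:, :, :, 1:] - out[:, :, :, :-1]).abs().sum()
    dy = (out[:, :, 1:, :] - out[:, :, :-1, :]).abs().sum()
    return dx + dy
```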
In addition, a total loss function is finally obtained from the plurality of loss functions, and the total loss function is defined as:

$$L = p_1 \cdot L_{TV} + p_2 \cdot L_{\ell 1} + p_3 \cdot L_{GAN} + p_4 \cdot L_{VGG} + p_5 \cdot SSIM(x, y)$$

wherein the coefficients preceding each loss function (namely $p_1, p_2, p_3, p_4, p_5$) represent the weight values of the respective loss functions. The weight values may be set based on prior (empirical) values, which is not particularly limited in the embodiments of the present application.
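Reusing the loss sketches above, the total loss could be assembled as follows; the weight values are placeholders standing in for the prior-based values mentioned, and the grayscale-to-3-channel repeat for the VGG term is an assumption. Note that the formula adds SSIM directly even though SSIM is a similarity; many implementations would use (1 - SSIM) instead:

```python
# Hypothetical prior-based weights; the embodiment does not disclose concrete values.
p1, p2, p3, p4, p5 = 1.0, 10.0, 0.1, 1.0, 1.0

def total_loss(out, target, D):
    l_tv = tv_loss(out)                           # smoothness / noise suppression
    l_l1 = (target - out).abs().sum()             # 1-norm loss
    l_gan = gan_loss(D, target, out)              # discrimination loss
    l_vgg = vgg_loss(out.repeat(1, 3, 1, 1),      # VGG expects 3-channel input
                     target.repeat(1, 3, 1, 1))
    l_ssim = ssim_global(out, target)             # added as-is, per the formula
    return p1 * l_tv + p2 * l_l1 + p3 * l_gan + p4 * l_vgg + p5 * l_ssim
```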
The total loss function can be obtained based on the plurality of loss functions, and whether to end the training process is determined by evaluating its output value. Optionally, model training ends when the output value of the total loss function is stable and relatively small.
"Stable" means that the output value no longer changes, or changes only by a relatively small amount. In some possible embodiments, the output value of the total loss function may be considered stable when its change is less than a certain change threshold.
Whether the output value of the total loss function is small enough can likewise be determined by a threshold. In some possible embodiments, training ends when the output value of the total loss function is less than a certain threshold.
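The stopping rule described here might be checked with a helper like the following sketch; the window size and both thresholds are illustrative assumptions:

```python
def should_stop(loss_history, change_threshold=1e-4, value_threshold=0.05, window=10):
    """Stop when the loss has stabilized over a recent window and is below a value threshold."""
    if len(loss_history) < window:
        return False
    recent = loss_history[-window:]
    stable = max(recent) - min(recent) < change_threshold  # "stable": small variation
    small = recent[-1] < value_threshold                   # "relatively small" value
    return stable and small
```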
Step 404, training through the multiple loss functions to obtain the first repair model.
That is, because a plurality of loss functions are considered during model training, the repaired fingerprint image can be evaluated more comprehensively, and the resulting first repair model is more accurate, so that the quality of fingerprint images subsequently repaired by the first repair model is better.
From the above, the fingerprint repair model according to the embodiment of the application considers multiple losses in the training process and adds deep fusion. Compared with existing fingerprint repair models, which only consider the root mean square error (RMSE) of local differences, the fingerprint repair model of the embodiment of the application has fewer parameters, shorter running time, and higher precision. In addition, simulation comparisons show that the false rejection rate (FRR) of the trained fingerprint repair model of the embodiment of the application is lower. FRR is one of the most important indicators characterizing the performance of a biometric system.
It should be understood that the training process of the first repair model may be performed on a platform such as a server, a server cluster, or a cloud server, or may be performed in an electronic device, or may be performed on another platform (such as various simulation software platforms), which is not limited in this embodiment of the present application.
As a possible implementation manner, if the training process of the first repair model is performed on the platform, the trained first repair model may be preset in the electronic device after the training is finished. Thus, when the fingerprint is repaired, the electronic equipment can directly call the first repair model to repair the fingerprint.
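As an illustration only (the file name and the reuse of the RepairNetSketch class from the earlier sketch are assumptions, not the device's actual implementation), presetting and invoking the trained model could look like:

```python
import torch

# Load the preset weights into the (assumed) network structure and switch to inference mode.
model = RepairNetSketch()
model.load_state_dict(torch.load("first_repair_model.pt", map_location="cpu"))
model.eval()

def repair_fingerprint(first_fingerprint: torch.Tensor) -> torch.Tensor:
    """Repair a failed (N, 1, H, W) fingerprint acquisition with the first repair model."""
    with torch.no_grad():
        return model(first_fingerprint)  # the repaired (second) fingerprint
```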
FIG. 7 shows an exemplary comparison of fingerprint images before and after repair. Fig. 7 (1) shows an exemplary fingerprint image before repair; after the fingerprint image shown in (1) of fig. 7 is repaired using the first repair model according to the embodiment of the present application, the fingerprint image shown in (2) of fig. 7 is obtained. Likewise, after the fingerprint image shown in (3) of fig. 7 is repaired using the first repair model, the fingerprint image shown in (4) of fig. 7 is obtained. As can be seen from fig. 7, compared with the fingerprint image before repair, the repaired fingerprint image is smoother and clearer, and noise is removed. Performing fingerprint identification with the repaired fingerprint image can greatly improve the success rate of fingerprint identification.
Fig. 8 is a schematic diagram of an application scenario to which an embodiment of the present application applies. As shown in (1) of fig. 8, the user taps the screen to unlock but fingerprint unlocking fails; the interface then displays as shown in (2) of fig. 8, where window 11 prompts the user: "Fingerprint match failed. Please keep the finger and sensor clean." After the fingerprint identification method provided by the embodiment of the application is adopted, the fingerprint collected by the mobile phone can be repaired through the first repair model. As shown in (3) of fig. 8, when the user taps the screen to unlock, the mobile phone may repair the fingerprint that failed to be identified through the first repair model. Unlocking with the repaired fingerprint can then succeed, and the interface displays as shown in (4) of fig. 8, showing the interface after successful fingerprint unlocking.
It should be understood that the application scenario in fig. 8 is merely for the convenience of understanding by those skilled in the art, and is not intended to limit the embodiments of the present application to the specific scenario illustrated.
The fingerprint identification method provided by the embodiment of the application is described in detail above with reference to fig. 1 to 8. An embodiment of the device of the present application will be described in detail below with reference to fig. 9. It should be understood that the fingerprint recognition device according to the embodiment of the present application may perform the various fingerprint recognition methods according to the foregoing embodiments of the present application, that is, the following specific working processes of various products may refer to the corresponding processes in the foregoing method embodiments.
Fig. 9 is a schematic block diagram of a fingerprint recognition device 900 according to an embodiment of the present application.
It should be appreciated that the apparatus 900 may perform the method of fingerprint identification shown in fig. 4-8. The fingerprint recognition apparatus 900 includes: an input unit 910 and a processing unit 920. In one possible example, apparatus 900 may be a terminal device.
In one example, the input unit 910 is configured to receive a first operation of a user, where the first operation is used to trigger fingerprint identification;
the processing unit 920 is configured to respond to the first operation, call a fingerprint acquisition module to acquire a first fingerprint, and perform fingerprint identification based on the first fingerprint;
The processing unit 920 is further configured to repair the first fingerprint by using a first repair model when fingerprint identification fails, so as to obtain a second fingerprint;
the first repair model is obtained by model training based on fingerprint image data pairs, a plurality of loss functions are determined in the model training process, the loss functions comprise loss functions generated based on global features, the fingerprint image data pairs comprise data of a third fingerprint and a fourth fingerprint with a corresponding relation, and the quality of the fourth fingerprint is better than that of the third fingerprint;
the processing unit 920 is further configured to perform fingerprint identification based on the second fingerprint.
Optionally, as a possible implementation manner, the training network of the first repair model includes a global averaging pooling layer and a scaling layer, where the global averaging pooling layer is used for fusing feature information of the fingerprint image, and the scaling layer is used for scaling a feature map of the fingerprint image.
Optionally, as a possible implementation manner, the plurality of loss functions are determined based on a first output result and the fourth fingerprint, where the first output result is: and inputting the third fingerprint into a training network of the first repair model during model training.
Optionally, as one possible implementation, the plurality of loss functions includes one or more of the following loss functions: overall change loss function, image structure multiscale loss function, gradient loss function, generation loss function.
Optionally, as one possible implementation, the overall change loss function satisfies the following equation:

$$L_{TV} = \frac{1}{C\,H\,W}\left(\left\|\nabla_x F_w(I_s)\right\|_2^2 + \left\|\nabla_y F_w(I_s)\right\|_2^2\right)$$

wherein $L_{TV}$ is the overall change loss function, $F_w(I_s)$ represents the first output result, $C$ represents the number of channels of the image, $H$ represents the height of the image, $W$ represents the width of the image, $\|\cdot\|_2^2$ represents the squared 2-norm operation, $\nabla_x$ represents the gradient vector in the x direction, and $\nabla_y$ represents the gradient vector in the y direction.
Optionally, as a possible implementation manner, the image structure multiscale loss function satisfies the following formula:

$$SSIM(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$

wherein SSIM(x, y) represents the image structure multiscale loss function, $\mu_x$ is the mean of x, $\mu_y$ is the mean of y, $\sigma_x^2$ is the variance of x, $\sigma_y^2$ is the variance of y, $\sigma_{xy}$ is the covariance of x and y, and $c_1$ and $c_2$ are two stabilizing constants.
Optionally, as one possible implementation, the gradient loss function satisfies the following equation:

$$L_{gradient} = \Delta x + \Delta y$$

wherein $L_{gradient}$ represents the gradient loss function, $\Delta x$ represents the gradient in the x direction, and $\Delta y$ represents the gradient in the y direction.
Optionally, as a possible implementation manner, the generation loss function satisfies the following formula:

$$L_{GAN} = -\sum \log D\!\left(I_t, F_w(I_s)\right)$$

wherein $L_{GAN}$ represents the GAN loss function, $D$ represents the discriminator in the generation model, $F_w(I_s)$ represents the first output result, and $I_t$ represents the image of the fourth fingerprint.
Optionally, as one possible implementation manner, the plurality of loss functions further includes one or more of the following loss functions: a norm loss function, a feature loss function.
Optionally, as a possible implementation manner, the fingerprint image data pair is obtained through a two-way generative adversarial network degradation model, wherein the two-way generative adversarial network degradation model is trained based on training data.
It should be appreciated that the apparatus 900 described above is embodied in the form of functional units. The term "unit" herein may be implemented in the form of software and/or hardware, to which embodiments of the application are not limited in particular.
For example, a "unit" may be a software program, a hardware circuit, or a combination of both that implements the functions described above. The hardware circuitry may include application-specific integrated circuits (ASICs), electronic circuits, processors (e.g., shared, dedicated, or group processors) and memory that execute one or more software or firmware programs, integrated logic circuits, and/or other suitable devices that provide the above-described functionality. In a simple embodiment, one skilled in the art will appreciate that the apparatus 900 may take the form shown in FIG. 2.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The application also provides a computer program product which, when executed by a processor, implements the method of any of the method embodiments of the application.
The computer program product may be stored in a memory and eventually converted to an executable object file that can be executed by a processor through preprocessing, compiling, assembling, and linking.
The application also provides a computer readable storage medium having stored thereon a computer program which when executed by a computer implements the method according to any of the method embodiments of the application. The computer program may be a high-level language program or an executable object program.
The computer readable storage medium may be volatile memory or nonvolatile memory, or may include both volatile memory and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disc, or other media capable of storing program code.
It should be understood that, in various embodiments of the present application, the size of the sequence number of each process does not imply an order of execution; the order of execution of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
In addition, the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein is merely one association relationship describing the associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship. For example, A/B may represent A or B.
The terms "first," "second," etc. appearing in embodiments of the present application are for descriptive purposes only, merely to distinguish between different objects (such as different "fingerprints"), and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first," "second," etc. may explicitly or implicitly include one or more such features. In the description of embodiments of the application, "at least one (item)" means one or more, and "a plurality" means two or more. "At least one of" the following items or similar expressions means any combination of these items, including any combination of a single item or a plurality of items.
For example, an expression such as "in embodiments of the application, the item includes at least one of: A, B, and C" generally means that, unless otherwise specified, the item may be any one of the following: A; B; C; A and B; A and C; B and C; A, B and C; A and A; A, A and A; A, A and B; A, A and C; A, B and B; A, C and C; B and B; B, B and C; C and C; C, C and C; and other combinations of A, B and C. The above uses the three elements A, B and C as an example of the optional entries for the item; when the expression is "the item includes at least one of: A, B, …, and X", i.e., when more elements appear in the expression, the entries to which the item applies can likewise be obtained according to the foregoing rules.
In summary, the foregoing description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (13)

1. A method of fingerprint identification, comprising:
receiving a first operation of a user, wherein the first operation is used for triggering fingerprint identification;
Responding to the first operation, acquiring a first fingerprint, and carrying out fingerprint identification based on the first fingerprint;
when fingerprint identification fails, repairing the first fingerprint by using a first repairing model to obtain a second fingerprint;
the first repair model is obtained by model training based on fingerprint image data pairs, a plurality of loss functions are determined in the model training process, the loss functions comprise loss functions generated based on global features, the fingerprint image data pairs comprise data of a third fingerprint and a fourth fingerprint with a corresponding relation, and the quality of the fourth fingerprint is better than that of the third fingerprint;
and fingerprint identification is carried out based on the second fingerprint.
2. The method of claim 1, wherein the training network of the first repair model comprises a global averaging pooling layer for fusing feature information of the fingerprint image and a scaling layer for scaling a feature map of the fingerprint image.
3. The method of claim 1 or 2, wherein the plurality of loss functions are determined based on a first output result and the fourth fingerprint, wherein the first output result is: and inputting the third fingerprint into a training network of the first repair model during model training.
4. A method according to any one of claims 1 to 3, wherein the plurality of loss functions comprises one or more of the following: overall change loss function, image structure multiscale loss function, gradient loss function, generation loss function.
5. The method of claim 4, wherein the overall change loss function satisfies the following equation:

$$L_{TV} = \frac{1}{C\,H\,W}\left(\left\|\nabla_x F_w(I_s)\right\|_2^2 + \left\|\nabla_y F_w(I_s)\right\|_2^2\right)$$

wherein $L_{TV}$ is the overall change loss function, $F_w(I_s)$ represents the first output result, $C$ represents the number of channels of the image, $H$ represents the height of the image, $W$ represents the width of the image, $\|\cdot\|_2^2$ represents the squared 2-norm operation, $\nabla_x$ represents the gradient vector in the x direction, and $\nabla_y$ represents the gradient vector in the y direction.
6. The method of claim 4 or 5, wherein the image structure multiscale loss function satisfies the following equation:

$$SSIM(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$

wherein SSIM(x, y) represents the image structure multiscale loss function, $\mu_x$ is the mean of x, $\mu_y$ is the mean of y, $\sigma_x^2$ is the variance of x, $\sigma_y^2$ is the variance of y, $\sigma_{xy}$ is the covariance of x and y, and $c_1$ and $c_2$ are two stabilizing constants.
7. The method according to any one of claims 4 to 6, wherein the gradient loss function satisfies the following formula:

$$L_{gradient} = \Delta x + \Delta y$$

wherein $L_{gradient}$ represents the gradient loss function, $\Delta x$ represents the gradient in the x direction, and $\Delta y$ represents the gradient in the y direction.
8. The method according to any one of claims 4 to 7, wherein the generation loss function satisfies the following formula:

$$L_{GAN} = -\sum \log D\!\left(I_t, F_w(I_s)\right)$$

wherein $L_{GAN}$ represents the GAN loss function, $D$ represents the discriminator in the generation model, $F_w(I_s)$ represents the first output result, and $I_t$ represents the image of the fourth fingerprint.
9. The method of any one of claims 4 to 8, wherein the plurality of loss functions further comprises one or more of the following: a norm loss function, a feature loss function.
10. The method according to any one of claims 1 to 9, wherein the fingerprint image data pair is obtained through a two-way generative adversarial network degradation model, wherein the two-way generative adversarial network degradation model is trained based on training data.
11. An electronic device comprising a processor and a memory, the processor and the memory being coupled, the memory being for storing a computer program that, when executed by the processor, causes the electronic device to perform the method of any one of claims 1 to 10.
12. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which when executed by a processor causes the processor to perform the method of any of claims 1 to 10.
13. A chip comprising a processor which, when executing instructions, performs the method of any of claims 1 to 10.