CN117854124A - Fingerprint image processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117854124A
Authority
CN
China
Prior art keywords
image
sub
fingerprint
images
target
Prior art date
Legal status
Pending
Application number
CN202410093038.1A
Other languages
Chinese (zh)
Inventor
陈兵
余涛
王信亮
Current Assignee
Shenzhen Goodix Technology Co Ltd
Original Assignee
Shenzhen Goodix Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Goodix Technology Co Ltd filed Critical Shenzhen Goodix Technology Co Ltd
Priority to CN202410093038.1A priority Critical patent/CN117854124A/en
Publication of CN117854124A publication Critical patent/CN117854124A/en
Pending legal-status Critical Current

Landscapes

  • Image Input (AREA)
  • Collating Specific Patterns (AREA)
  • Image Processing (AREA)

Abstract

The present application provides a fingerprint image processing method and apparatus, an electronic device, and a storage medium. The fingerprint image processing method includes the following steps: acquiring a plurality of fingerprint sub-images respectively acquired at the current moment by a plurality of pixel units of an optical fingerprint sensor, and fusing the plurality of fingerprint sub-images to obtain a fused image; determining, from the fused image, a first image area corresponding to the position of a first target sub-image among the plurality of fingerprint sub-images, and extending the edge of the first image area according to the fused image to obtain a first extended image; determining an input image according to at least one fingerprint sub-image, and extending the edge of the input image according to the fused image to obtain a second extended image, wherein the at least one fingerprint sub-image includes the first target sub-image; and determining one of the first extended image and the second extended image as an output image for fingerprint identification.

Description

Fingerprint image processing method and device, electronic equipment and storage medium
Technical Field
Embodiments of the present application relate to the technical field of optical fingerprint image processing, and in particular to a fingerprint image processing method and apparatus, an electronic device, and a storage medium.
Background
The current light path design of some under-screen ultrathin optical fingerprint sensors generally adopts multi-angle light paths: a plurality of pixel units arranged in an array acquire fingerprint images at a plurality of different angles through multi-directional light channels, so that effective fingerprint signals can be collected to the greatest extent. However, using the plurality of fingerprint images to compare against registered fingerprint templates reduces recognition speed and increases storage consumption during fingerprint recognition; if only one or a few of the plurality of fingerprint images are used, the information in all the collected fingerprint images cannot be fully utilized, and the success rate of fingerprint identification is easily reduced. Therefore, a new solution is needed to at least partially ameliorate these problems.
Disclosure of Invention
In view of the foregoing, embodiments of the present application provide a fingerprint image processing method, apparatus, electronic device, and storage medium, so as to at least partially solve the foregoing problems.
According to a first aspect of an embodiment of the present application, there is provided a fingerprint image processing method, including:
acquiring a plurality of fingerprint sub-images respectively acquired at the current moment by a plurality of pixel units of an optical fingerprint sensor, and fusing the plurality of fingerprint sub-images to obtain a fused image;
determining, from the fused image, a first image area corresponding to the position of a first target sub-image among the plurality of fingerprint sub-images, and extending the edge of the first image area according to the fused image to obtain a first extended image;
determining an input image according to at least one fingerprint sub-image, and extending the edge of the input image according to the fused image to obtain a second extended image, wherein the at least one fingerprint sub-image includes the first target sub-image; and
determining one of the first extended image and the second extended image as an output image for fingerprint identification.
In some optional embodiments, the first target sub-image is the fingerprint sub-image with the best image quality among the plurality of fingerprint sub-images, and determining, from the fused image, the first image area corresponding to the position of the first target sub-image among the plurality of fingerprint sub-images includes: determining, from the fused image, the first image area corresponding to the fingerprint sub-image with the best image quality.
In some optional embodiments, determining one of the first extended image and the second extended image as the output image for fingerprint identification includes: determining whichever of the first extended image and the second extended image has the higher image quality as the output image for fingerprint identification.
In some optional embodiments, extending the edge of the first image area according to the fused image to obtain the first extended image includes: determining, from the fused image, a second image area that includes the first image area, wherein the second image area extends beyond at least one edge of the first image area; and cropping the second image area out of the fused image, the cropping result serving as the first extended image.
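The crop-based extension above can be sketched as follows (a minimal Python/NumPy illustration; the `region` tuple representation and the fixed `margin` parameter are assumptions of this example, since the embodiment does not fix how far the second image area extends):

```python
import numpy as np

def expand_region_by_crop(fused, region, margin):
    """Crop a 'second image area' enclosing `region` from the fused image.

    Each side is extended by up to `margin` pixels; a side of the first
    image area that already coincides with the fused-image edge cannot
    be extended, matching the edge cases described in the text.
    """
    top, left, h, w = region           # first image area inside `fused`
    H, W = fused.shape[:2]
    t = max(top - margin, 0)
    l = max(left - margin, 0)
    b = min(top + h + margin, H)
    r = min(left + w + margin, W)
    # the cropping result serves as the first extended image
    return fused[t:b, l:r]
```

For a 10x10 fused image and a 4x4 first image area at (2, 2), a margin of 2 yields an 8x8 first extended image; a first image area touching the fused-image corner only extends on the two free sides.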
In some optional embodiments, if none of the four edges of the first image area coincides with an edge of the fused image, the four edges of the second image area respectively extend beyond the corresponding four edges of the first image area; or, if at least one edge of the first image area coincides with an edge of the fused image, the second image area extends beyond each edge of the first image area that does not coincide with an edge of the fused image.
In some optional embodiments, determining the input image according to the at least one fingerprint sub-image includes: locally fusing the overlapping portion between the first target sub-image and a second target sub-image among the plurality of fingerprint sub-images to obtain a target locally fused image, wherein the second target sub-image is any fingerprint sub-image among the plurality of fingerprint sub-images other than the first target sub-image; and stitching the input image from the portion of the first target sub-image that was not locally fused and the target locally fused image.
In some optional embodiments, locally fusing the overlapping portion between the first target sub-image and the second target sub-image to obtain the target locally fused image includes: determining target fusion weights respectively corresponding to the first target sub-image and the second target sub-image within the overlapping portion according to the first local image quality of the first target sub-image and the second local image quality of the second target sub-image, where a higher local image quality corresponds to a higher target fusion weight; and, according to the target fusion weights respectively corresponding to the first target sub-image and the second target sub-image, performing a weighted summation of the pixel values of the first target sub-image and the pixel values of the second target sub-image within the overlapping portion, so as to locally fuse the overlapping portion between the first target sub-image and the second target sub-image and obtain the target locally fused image.
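The quality-weighted summation can be sketched as follows (a Python/NumPy sketch; normalizing the two quality scores into weights that sum to one is an assumption of this example, as the embodiment only requires that higher local quality receives the higher weight):

```python
import numpy as np

def fuse_overlap(a, b, quality_a, quality_b):
    """Weighted sum of the overlapping pixels of two sub-images.

    `a` and `b` are the co-located overlap regions of the first and
    second target sub-images; `quality_a` / `quality_b` are their local
    image quality scores. Higher quality -> higher fusion weight.
    """
    wa = quality_a / (quality_a + quality_b)
    wb = quality_b / (quality_a + quality_b)
    return wa * a.astype(np.float64) + wb * b.astype(np.float64)
```

With quality scores 1 and 3, for instance, the second image contributes three quarters of each fused pixel value.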
In some optional embodiments, extending the edge of the input image according to the fused image to obtain the second extended image includes: determining, from the fused image, a second image area that includes the first image area, and determining a third image area, namely the part of the second image area other than the first image area; and filling the third image area correspondingly outside the edge of the input image to form the second extended image.
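The filling step can be sketched as follows (a Python/NumPy sketch; the rectangular `region` representation and the `margin` parameter are assumptions of this illustration):

```python
import numpy as np

def extend_with_fused_border(input_img, fused, region, margin):
    """Form the 'second extended image'.

    Crops the enclosing second image area from the fused image, then
    overwrites its central first image area with the input image, so
    that only the border (the third image area) is taken from the
    fused image.
    """
    top, left, h, w = region
    H, W = fused.shape[:2]
    t = max(top - margin, 0)
    l = max(left - margin, 0)
    b = min(top + h + margin, H)
    r = min(left + w + margin, W)
    out = fused[t:b, l:r].copy()       # second image area
    # paste the input image over the first image area; the remaining
    # ring of fused-image pixels is the third image area
    out[top - t:top - t + h, left - l:left - l + w] = input_img
    return out
```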
In some optional embodiments, if the first target sub-image is the fingerprint sub-image with the best image quality among the plurality of fingerprint sub-images, the second target sub-image is the fingerprint sub-image with the second-best image quality among the plurality of fingerprint sub-images.
In some optional embodiments, determining the input image according to the at least one fingerprint sub-image and extending the edge of the input image according to the fused image to obtain the second extended image includes: determining the first target sub-image as the input image, and extending the edge of the first target sub-image according to the fused image to obtain the second extended image.
In some optional embodiments, extending the edge of the first target sub-image according to the fused image to obtain the second extended image includes: determining, from the fused image, a second image area that includes the first image area, and determining a third image area, namely the part of the second image area other than the first image area; and filling the third image area correspondingly outside the edge of the first target sub-image to form the second extended image.
In some optional embodiments, fusing the plurality of fingerprint sub-images to obtain the fused image includes: determining the quality-optimal sub-image, i.e. the fingerprint sub-image with the best image quality among the plurality of fingerprint sub-images; for each fingerprint sub-image other than the quality-optimal sub-image, determining the similarity of the overlapping portion between that fingerprint sub-image and the quality-optimal sub-image according to reference offset data, and determining the real-time offset at the current moment between that fingerprint sub-image and the quality-optimal sub-image according to the similarity; and aligning the plurality of fingerprint sub-images according to the real-time offsets at the current moment between each fingerprint sub-image and the quality-optimal sub-image, and fusing the aligned fingerprint sub-images to obtain the fused image.
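The similarity-driven search for the real-time offset might be sketched as below (Python/NumPy; the negative mean absolute difference as the similarity measure and the one-pixel search window around the reference offset are assumptions, since the embodiment fixes neither):

```python
import numpy as np

def overlap_similarity(best, sub, dx, dy):
    """Similarity of the overlap implied by offset (dx, dy) of `sub`
    relative to the quality-optimal sub-image `best` (negative mean
    absolute difference; larger means more similar)."""
    H, W = best.shape
    xs0, xs1 = max(dx, 0), min(W + dx, W)
    ys0, ys1 = max(dy, 0), min(H + dy, H)
    if xs0 >= xs1 or ys0 >= ys1:
        return -np.inf                  # no overlap at this offset
    a = best[ys0:ys1, xs0:xs1]
    b = sub[ys0 - dy:ys1 - dy, xs0 - dx:xs1 - dx]
    return -np.abs(a.astype(float) - b.astype(float)).mean()

def refine_offset(best, sub, ref_offset, search=1):
    """Search a small window around the stored reference offset for the
    offset whose overlap with the quality-optimal sub-image is most
    similar; that candidate is the 'real-time offset' at the moment."""
    rx, ry = ref_offset
    candidates = [(rx + i, ry + j)
                  for i in range(-search, search + 1)
                  for j in range(-search, search + 1)]
    return max(candidates, key=lambda c: overlap_similarity(best, sub, *c))
```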
According to a second aspect of embodiments of the present application, there is provided a fingerprint image processing apparatus, comprising:
a fusion module, configured to acquire a plurality of fingerprint sub-images respectively acquired at the current moment by a plurality of pixel units of an optical fingerprint sensor, and fuse the plurality of fingerprint sub-images to obtain a fused image;
a first extension module, configured to determine, from the fused image, a first image area corresponding to the position of a first target sub-image among the plurality of fingerprint sub-images, and extend the edge of the first image area according to the fused image to obtain a first extended image;
a second extension module, configured to determine an input image according to at least one fingerprint sub-image and extend the edge of the input image according to the fused image to obtain a second extended image, wherein the at least one fingerprint sub-image includes the first target sub-image; and
a determining module, configured to determine one of the first extended image and the second extended image as an output image for fingerprint identification.
According to a third aspect of embodiments of the present application, there is provided an electronic device, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus; the memory is configured to store a computer program; and the processor is configured to execute the method provided in the first aspect by running the computer program stored on the memory.
According to a fourth aspect of embodiments of the present application, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the method as provided by the first aspect.
According to a fifth aspect of embodiments of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method as provided in the first aspect.
According to a sixth aspect of embodiments of the present application, there is provided a fingerprint identification device applied to an electronic apparatus having a display screen, the fingerprint identification device including: an optical fingerprint sensor for: imaging a plurality of light signals in different directions reflected by the finger above the display screen to acquire a plurality of fingerprint sub-images; a processing unit for performing the method as provided in the first aspect.
According to the fingerprint image processing scheme provided by the embodiments of the present application, a plurality of fingerprint sub-images respectively acquired by a plurality of pixel units of an optical fingerprint sensor can be fused to obtain a fused image; a first image area corresponding to the position of the first target sub-image among the plurality of fingerprint sub-images is determined from the fused image, and its edge is extended according to the fused image to obtain a first extended image; an input image is then determined according to at least one fingerprint sub-image (which includes the first target sub-image), and its edge is extended according to the fused image to obtain a second extended image; finally, one of the first extended image and the second extended image is determined as the output image for fingerprint identification. On one hand, because one of the first extended image and the second extended image is output as the image used for fingerprint identification, even when the optical fingerprint sensor acquires a plurality of fingerprint sub-images at a time, identification is performed with a single, better output image, which effectively reduces storage consumption during fingerprint identification and improves both the identification speed and the identification success rate. On the other hand, because both the first extended image and the second extended image are obtained by extension from the fused image, and the fused image aggregates the information of the collected fingerprint sub-images while having a larger field of view and lower spatial noise, using one of the two as the output image allows fingerprint identification to benefit from that larger field of view and aggregated information.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required by the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for a person of ordinary skill in the art, other drawings may also be obtained from these drawings.
Fig. 1 shows a flowchart of an alternative fingerprint image processing method of the present application.
Fig. 2A shows a flowchart of an optional substep of "fusing multiple fingerprint sub-images to obtain a fused image" in step S102.
Fig. 2B shows a schematic view of a multi-angle optical path of a pixel unit of an alternative optical fingerprint sensor according to the present application.
Fig. 2C shows a schematic diagram of the calculation of the real-time offset between the fingerprint sub-image and the quality optimal sub-image.
Fig. 3A shows an optional substep flow chart of "determine the similarity of the overlapping portion of the fingerprint sub-image and the quality optimal sub-image from the reference offset data, and determine the real-time offset of the current moment between the fingerprint sub-image and the quality optimal sub-image from the similarity" in substep S1022.
Fig. 3B shows an alternative substep flow chart of the way the reference offset data is updated.
Fig. 3C shows an alternative substep flow chart of the "fusing the aligned fingerprint sub-images to get a fused image" in substep S1023.
Fig. 3D shows an optional substep flow chart of "fusing the parts of the respective fingerprint sub-images within the overlap region according to the image quality of the parts of the respective fingerprint sub-images within the overlap region and the similarity of the overlapping parts of the respective fingerprint sub-images within the overlap region and the quality optimal sub-image" in substep S10231.
Fig. 3E shows a schematic diagram of fusing parts of two images.
Fig. 4A shows a flowchart of an optional substep of "expanding the edges of the first image region from the fused image to obtain a first expanded image" in step S104.
Fig. 4B shows a flowchart of an optional substep of determining an input image from at least one fingerprint sub-image in step S106.
Fig. 4C shows an alternative substep flow chart of substep S1061.
Fig. 5 shows a schematic view of the direction of expansion of the edges of the first image region of the present application in the fused image.
Fig. 6 shows a schematic diagram of a partial fusion of overlapping portions of a first target sub-image and a second target sub-image.
Fig. 7 shows a schematic diagram of an edge of an input image being extended by a fused image to obtain a second extended image.
Fig. 8 shows a block diagram of an overall implementation of an alternative fingerprint image processing scheme of the present application.
Fig. 9 shows a block diagram of an overall implementation of another alternative fingerprint image processing scheme of the present application.
Fig. 10 shows a schematic diagram of an implementation procedure of the fingerprint image processing scheme of the present application.
Fig. 11 shows a schematic diagram of an exemplary fingerprint image processing device of the present application.
Fig. 12 shows a schematic diagram of an exemplary electronic device of the present application.
Fig. 13 shows a schematic diagram of an exemplary fingerprint recognition device of the present application.
Detailed Description
In order to better understand the technical solutions in the embodiments of the present application, those solutions are described clearly and in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application shall fall within the scope of protection of the embodiments of the present application.
In the light path design of an ultrathin under-screen optical fingerprint sensor, two extreme signal acquisition states need to be considered. The first is a scene in which the fingerprint fits the screen well, such as finger identification at normal temperature. In this case, the smaller the light receiving angle of the signal acquisition light path, the more likely it is that a single fingerprint ridge signal (or fingerprint valley signal) is received, so the signal amount is higher, which is more favorable for identification. The other is a scene in which the fingerprint fits poorly, such as finger identification at low temperature or under light pressing. Here a larger light receiving angle of the signal acquisition light path is desirable: because the fingerprint is a three-dimensional surface, when light is incident at a large angle the valley reflection light is more easily blocked by the side walls of the ridges, which increases the difference between ridge and valley signals and thus increases the signal. An actual fingerprint press falls somewhere between these two extremes. To account for both situations, the current light path design of some under-screen ultrathin optical fingerprint sensors generally adopts multi-angle light paths, with a plurality of pixel units arranged in an array that acquire fingerprint images at different angles through multi-directional light channels, so that effective fingerprint signals can be collected to the greatest extent.
However, using a plurality of fingerprint images to compare against registered fingerprint templates reduces recognition speed and increases storage consumption during fingerprint recognition; if only one or a few of the plurality of fingerprint images are used, the information in all the collected fingerprint images cannot be fully utilized, and the success rate of fingerprint identification is easily reduced.
In view of this, a fingerprint image processing scheme is proposed in the present application. Specific implementations of embodiments of the present application are further described below with reference to the accompanying drawings.
Fig. 1 shows a flowchart of an alternative fingerprint image processing method of the present application. As shown in fig. 1, the fingerprint image processing method includes steps S102, S104, S106, and S108, specifically:
s102: and acquiring a plurality of fingerprint sub-images respectively acquired by a plurality of pixel units of the optical fingerprint sensor at the current moment, and fusing the plurality of fingerprint sub-images to obtain a fused image.
The specific structure of the optical fingerprint sensor is not limited in this application. For example, the optical fingerprint sensor may be an under-screen optical fingerprint sensor used to implement the under-screen optical fingerprint recognition function of an electronic device. The optical fingerprint sensor can image a plurality of light signals in different directions reflected by a finger above the display screen of the electronic device, so as to acquire a plurality of fingerprint sub-images. It should be noted that the fingerprint image processing method of the present application may be executed by any processing unit. For example, the method may be performed by a processing unit of an electronic device to which the optical fingerprint sensor is applied (the electronic device including but not limited to a mobile terminal, a computer, a fingerprint lock, etc.), where the processing unit may include one or more chips capable of data processing, including but not limited to a CPU (Central Processing Unit), an MCU (Microcontroller Unit), a GPU (Graphics Processing Unit), an FPGA (Field-Programmable Gate Array), etc., which is advantageous for improving the processing effect.
The optical fingerprint sensor in the present application adopts a multi-angle light path design and includes a plurality of pixel units, where each pixel unit may include one or more optical pixels, and the plurality of pixel units may include the same number of optical pixels. Fig. 2B shows a schematic diagram of the multi-angle light paths of the pixel units of an alternative optical fingerprint sensor. As shown in Fig. 2B, the optical fingerprint sensor includes a plurality of pixel units (9 pixel units in the simplified example of Fig. 2B; it should be understood that in practice there may be more), with a microlens unit disposed above them. When a finger presses to enter a fingerprint through the optical fingerprint sensor, the reflected light is received by each pixel unit at various light receiving angles through the different light paths of the microlens unit, and each pixel unit can generate a fingerprint image from the received light, thereby realizing the collection of a fingerprint image by each pixel unit. The light rays at various angles received by the pixel units are the plurality of light signals in different directions. It should be understood that the plurality of fingerprint sub-images described in step S102 are the fingerprint images respectively acquired by the plurality of pixel units.
For ease of description, the following takes the 9 fingerprint sub-images acquired by 9 pixel units as an example.
Due to the multi-angle light paths, a certain image offset exists between every two of the fingerprint sub-images. Because the fingerprint sub-images are offset from one another, the area of the fused image is larger than that of a single fingerprint sub-image; that is, the field of view of the fused image is larger than that of a single fingerprint sub-image. The fused image can therefore aggregate the effective information of the plurality of fingerprint sub-images, and an output image for fingerprint identification obtained by processing the fused image can utilize the information in this large field of view, which helps to enlarge the identification field of view of fingerprint identification.
The specific implementation of step S102 is not limited in this application. Alternatively, in step S102 in the present application, the fusion process may be performed after the alignment and offset of the plurality of fingerprint sub-images, so as to obtain a fused image.
Optionally, when the plurality of fingerprint sub-images are aligned and offset, a preset fixed image offset may be used. This takes into account that, in the optical fingerprint sensor of the aforementioned multi-path design, the directions of the multi-directional light channels are fixed, so there is an approximately stable image offset between the fingerprint images acquired along those light channels. Therefore, a suitable fixed image offset can be acquired in a dedicated manner and preset, so as to facilitate the fusion processing after aligning and shifting the plurality of fingerprint sub-images acquired by the plurality of pixel units. Other implementations are also possible, provided they meet the requirements.
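Alignment with preset fixed offsets followed by fusion can be sketched as follows (a Python/NumPy sketch; averaging the overlapping contributions is one simple fusion rule and an assumption of this example, and offsets are taken as non-negative for brevity):

```python
import numpy as np

def fuse_with_fixed_offsets(subimages, offsets):
    """Paste each sub-image onto a common canvas at its preset fixed
    (dx, dy) offset and average wherever images overlap.

    The canvas is larger than any single sub-image, illustrating the
    enlarged field of view of the fused image described in the text.
    """
    h, w = subimages[0].shape
    H = h + max(dy for dx, dy in offsets)
    W = w + max(dx for dx, dy in offsets)
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for img, (dx, dy) in zip(subimages, offsets):
        acc[dy:dy + h, dx:dx + w] += img
        cnt[dy:dy + h, dx:dx + w] += 1
    cnt[cnt == 0] = 1                   # avoid division by zero off-canvas
    return acc / cnt
```

Two 4x4 sub-images with offsets (0, 0) and (2, 0), for instance, fuse into a 4x6 canvas whose middle two columns are the average of both.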
It is also noted in this application that, because the finger is a three-dimensional surface, factors such as the fitting state caused by the fingerprint condition (such as finger dryness or wetness), slight deformation caused by the pressing state (such as light or heavy pressing), and the effective pressing area easily cause the actual offset between the fingerprint images to deviate from the fixed image offset, and the quality of the fused image obtained after fusing the plurality of fingerprint sub-images may then degrade. To improve on this, in some optional embodiments of the present application, referring to the flowchart shown in Fig. 2A, the "fusing the plurality of fingerprint sub-images to obtain a fused image" in step S102 includes substeps S1021 to S1023, specifically:
s1021: and determining a quality optimal sub-image with optimal image quality in the plurality of fingerprint sub-images.
The quality-optimal sub-image in the present application is the fingerprint sub-image with the best image quality among the plurality of fingerprint sub-images. The quality-optimal sub-image may be determined in any suitable manner, which is not limited here. For example, any suitable algorithm may be used to calculate one or more of the following indices for each fingerprint sub-image: Harris response (HR), normalized gradient (SR, Sobel response), peak signal-to-noise ratio (PSNR), and normalized signal quantity (NSR, which may characterize the ratio between the peak-to-peak value and the local mean). The image quality of the plurality of fingerprint sub-images is determined according to the calculation results, so that the quality-optimal sub-image with the best image quality can be determined. Alternatively, the fingerprint sub-image with the highest image quality score may be determined as the quality-optimal sub-image by calculating one or more of the above indices and scoring the image quality of each fingerprint sub-image.
The above indices can be used to determine image quality because they all essentially detect the sharpness of the stripe pattern: within a small area, a fingerprint sub-image is essentially similar to a periodic bright-and-dark stripe signal. The clearer the stripes, the larger the normalized gradient (SR); the larger the fluctuation of the neighborhood maximum-minimum difference relative to the maximum or minimum value (corresponding to the peak signal-to-noise ratio PSNR); the more the eigenvalues of the Harris response (HR) exhibit edge characteristics; and the larger the ratio of the peak signal to the local mean (corresponding to the normalized signal NSR, where the local mean is in a linear relationship with the illumination intensity and exposure time), so that under the same conditions a good fingerprint signal is larger, i.e. the NSR is higher. Determining the image quality of the fingerprint sub-images through one or more of these indices therefore ensures the reliability of the determined quality-optimal sub-image and facilitates subsequent data processing.
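Two of these indices can be sketched as follows (a Python/NumPy illustration; the Sobel-based gradient score, the global peak-to-peak over mean ratio, and the equal-weight combination are simplifying assumptions of this example, not the patent's prescribed formulas):

```python
import numpy as np

def sobel_response(img):
    """Mean Sobel gradient magnitude: clearer stripes -> larger value
    (a stand-in for the normalized gradient index SR)."""
    img = img.astype(np.float64)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = (patch * kx).sum()
            gy[y, x] = (patch * kx.T).sum()   # kx.T is the vertical kernel
    return np.hypot(gx, gy).mean()

def normalized_signal(img):
    """Peak-to-peak amplitude over the mean (a rough stand-in for the
    normalized signal index NSR)."""
    img = img.astype(np.float64)
    return (img.max() - img.min()) / img.mean()

def best_quality_index(subimages):
    """Index of the sub-image with the highest combined score."""
    scores = [sobel_response(s) + normalized_signal(s) for s in subimages]
    return int(np.argmax(scores))
```

A high-contrast stripe image scores higher on both indices than a flat image, so it would be selected as the quality-optimal sub-image.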
In addition, in step S1021 of the present application, a plurality of fingerprint sub-images at the current time are acquired, and the sub-image with the optimal quality is determined, which is favorable for real-time accurate fingerprint identification.
S1022: For each fingerprint sub-image other than the quality-optimal sub-image among the plurality of fingerprint sub-images, determine the similarity of the overlapping portion between that fingerprint sub-image and the quality-optimal sub-image according to the reference offset data, and determine the real-time offset at the current moment between that fingerprint sub-image and the quality-optimal sub-image according to the similarity. (Substep S1022 can be understood in conjunction with the flowchart of Fig. 2A.)
There will be some offset (i.e., image offset) between every two of the plurality of fingerprint sub-images. The offset between images is in pixels. The offset may include a lateral offset along the lateral direction of the image (e.g., the x direction, which may also be understood as the horizontal direction) and a longitudinal offset along the longitudinal direction of the image (e.g., the y direction, which may also be understood as the vertical direction). For ease of explanation, laterally rightward is taken here as the positive x direction, laterally leftward as the negative x direction, longitudinally downward as the positive y direction, and longitudinally upward as the negative y direction. Each offset may hereinafter be written in the form (x, y), where x is the lateral offset and y is the longitudinal offset: a positive x represents a lateral offset to the right, a negative x a lateral offset to the left, a positive y a longitudinal offset downward, and a negative y a longitudinal offset upward.
For example, fig. 2C shows a schematic diagram of real-time offset calculation between a fingerprint sub-image and a quality optimal sub-image. Also shown in connection with fig. 2C are x-direction (transverse) and y-direction (longitudinal) of the image. It should be understood that the fingerprint sub-image and the quality optimal sub-image shown in fig. 2C, and the overlapping portions therebetween, are only examples and are not intended to be limiting in any way.
Optionally, the reference offset data in the present application includes reference offsets between a plurality of reference fingerprint sub-images and a reference quality optimal sub-image, where the reference quality optimal sub-image is acquired by a first pixel unit of a plurality of pixel units, the plurality of reference fingerprint sub-images are acquired by a plurality of second pixel units of the plurality of pixel units, and the plurality of second pixel units are a plurality of pixel units other than the first pixel unit of the plurality of pixel units. By means of the reference offset data, data processing can be facilitated, and similarity of overlapping portions of the fingerprint sub-images and the quality optimal sub-images can be determined conveniently.
The reference offset data may be stored in advance (may be understood as initialized reference offset data) before the fingerprint image processing method of the present application is performed for the first time, and may be acquired as needed. Alternatively, the initialization of the reference offset data may be accomplished during the mass production phase using a striped weight or a weight with a special pattern (the striped weight or weight with a special pattern may be used to simulate a finger).
For example, the determination process of the reference offset data in the present application can be understood with reference to the following process. Taking the 9 pixel units of the optical fingerprint sensor in fig. 2B as an example, in the mass production stage, 9 images of the striped weight (for example, image 1 to image 9) can be acquired through the 9 pixel units (for example, pixel units 1 to 9), and the image with the optimal image quality among image 1 to image 9 is then determined. The image with the optimal image quality is taken as the reference quality optimal sub-image (image 1, for example), and the other 8 images (image 2 to image 9) can be taken as the reference fingerprint sub-images; pixel unit 1 is then the first pixel unit, and pixel units 2 to 9 are the second pixel units. Then, the image offsets of images 2 to 9 with respect to image 1 can be directly measured to obtain 8 image offsets, these 8 image offsets can be used as the 8 reference offsets, and the 8 reference offsets can be stored as the reference offset data to be directly acquired when needed. It should be understood that this process is merely an example for ease of understanding and is not intended as any limitation on this application.
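As an illustrative sketch (not part of the patent text), the initialization and persistence of such reference offset data might look as follows in Python; the offset values and helper names are hypothetical:

```python
import json

# Hypothetical reference offsets measured at the mass-production stage:
# image offsets of images 2..9 (pixel units 2..9) relative to image 1
# (pixel unit 1, the reference quality optimal sub-image), in pixels.
reference_offsets = {
    2: (34, 0), 3: (68, 1),
    4: (0, 33), 5: (34, 33), 6: (68, 34),
    7: (1, 66), 8: (35, 66), 9: (69, 67),
}

def save_reference_offsets(offsets, path):
    # Persist as JSON so the data can be loaded directly when needed.
    with open(path, "w") as f:
        json.dump({str(k): list(v) for k, v in offsets.items()}, f)

def load_reference_offsets(path):
    with open(path) as f:
        return {int(k): tuple(v) for k, v in json.load(f).items()}
```

The dict-of-tuples layout is just one convenient representation; any store that maps a second pixel unit to its (lateral, longitudinal) reference offset would do.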
The manner of calculating the similarity is not limited in this application. In some alternative embodiments, "determining the similarity of the overlapping portion of the fingerprint sub-image and the quality optimal sub-image from the reference offset data and determining the real-time offset of the current moment between the fingerprint sub-image and the quality optimal sub-image from the similarity" in step S1022 includes sub-steps S10221 and S10222, in particular:
s10221: according to the reference offset between the plurality of reference fingerprint sub-images and the reference quality optimal sub-image, calculating the current reference offset between the fingerprint sub-images and the quality optimal sub-image, and determining the real-time offset search range at the current moment according to the current reference offset.
Since the plurality of reference fingerprint sub-images and the reference quality optimal sub-image are acquired by the same plurality of pixel units as the plurality of fingerprint sub-images, when the fingerprint image processing method of the scheme is executed (for example, when fingerprint identification is performed in practice), if the quality optimal sub-image is not acquired by the first pixel unit, the current reference offset between each fingerprint sub-image and the quality optimal sub-image can be obtained through conversion by the reference offset between the plurality of reference fingerprint sub-images and the reference quality optimal sub-image.
For example, take the case where pixel unit 1 of the 9 pixel units is the first pixel unit that acquired the reference quality optimal sub-image, pixel units 2 to 9 are the second pixel units that acquired the reference fingerprint sub-images, and the 9 pixel units respectively acquire 9 images at the current time. Take the reference offset between the reference fingerprint sub-image acquired by pixel unit 2 and the reference quality optimal sub-image acquired by pixel unit 1 to be (x1, y1). If the quality optimal sub-image determined in step S1021 is the fingerprint sub-image acquired by pixel unit 2, then the current reference offset between the fingerprint sub-image acquired by pixel unit 1 and the quality optimal sub-image acquired by pixel unit 2 may be converted to (-x1, -y1); similarly, the current reference offset between the fingerprint sub-image collected by each other pixel unit and the quality optimal sub-image collected by pixel unit 2 can also be obtained by conversion from each reference offset. It should be understood that this is merely an example for ease of understanding and is not limiting of the present application.
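This conversion can be sketched as follows (a hypothetical helper; reference offsets are assumed to be stored relative to pixel unit 1, whose own offset is (0, 0)):

```python
def current_reference_offset(ref_offsets, src_unit, opt_unit):
    """Current reference offset of the sub-image from src_unit relative to
    the quality optimal sub-image from opt_unit, converted from reference
    offsets that are all stored relative to pixel unit 1."""
    sx1, sy1 = ref_offsets.get(src_unit, (0, 0))
    ox1, oy1 = ref_offsets.get(opt_unit, (0, 0))
    # Offsets are relative, so they subtract: (src -> unit 1) minus (opt -> unit 1).
    return (sx1 - ox1, sy1 - oy1)
```

With the example in the text: if the reference offset of pixel unit 2 is (x1, y1) and the quality optimal sub-image now comes from pixel unit 2, the current reference offset of pixel unit 1's sub-image indeed converts to (-x1, -y1).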
After the current reference offset is calculated, a real-time offset search range at the current moment can be determined, so that the real-time offset corresponding to the fingerprint sub-image can be accurately determined in the real-time offset search range.
In some alternative embodiments, the "determining the real-time offset search range at the current time instant from the current reference offset" in sub-step S10221 may include: and determining a real-time offset search range at the current moment according to the current reference offset and the preset offset allowance.
Based on this, the above embodiments in the present application determine the real-time offset search range by the current reference offset and the preset offset margin to ensure that an appropriate real-time offset can be searched in the real-time offset search range.
For example, the preset offset margin includes a positive offset margin and a negative offset margin in the image lateral direction, and a positive offset margin and a negative offset margin in the image longitudinal direction. Alternatively, referring to the example shown in fig. 2C, the absolute values of the positive offset margin and the negative offset margin in the image lateral direction are equal, both being wx; the positive and negative offset margins along the image longitudinal direction are equal in absolute value, both being wy. For example, the current reference offset may be denoted as (sx, sy), where sx represents the lateral offset and sy the longitudinal offset in the current reference offset. In connection with fig. 2C, the determined real-time offset search range may be expressed as: (sx + i, sy + j), where i ∈ [-wx, wx] and j ∈ [-wy, wy]. Of course, the equality of the absolute values of the positive and negative offset margins in the lateral/longitudinal direction is a particular example; in alternative embodiments they may be equal or unequal, and equal margins are assumed hereinafter for ease of illustration.
The corresponding range of the preset offset margin in the real-time offset search range can be understood as a neighborhood of the corresponding range of the current reference offset in the real-time offset search range. This alternative embodiment of the present application can therefore be understood as a "current reference offset + neighborhood retrieval" approach in order to ensure that the appropriate real-time offset is searched for in the real-time offset search range.
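The "current reference offset + neighborhood retrieval" search range can be sketched as follows (a hypothetical helper; sx, sy, wx, wy as defined in the text):

```python
def real_time_search_range(sx, sy, wx, wy):
    """Enumerate all candidate offsets (sx + i, sy + j) with
    i in [-wx, wx] and j in [-wy, wy], i.e. the neighborhood of the
    current reference offset bounded by the preset offset margins."""
    return [(sx + i, sy + j)
            for i in range(-wx, wx + 1)
            for j in range(-wy, wy + 1)]
```

For wx = wy = 2 this yields (2·2 + 1)² = 25 candidate offsets centered on (sx, sy).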
S10222: and determining the similarity of the overlapping parts of the fingerprint sub-image and the quality optimal sub-image, which correspond to all the offsets in the real-time offset search range at the current moment, and taking the offset corresponding to the maximum similarity as the real-time offset at the current moment between the fingerprint sub-image and the quality optimal sub-image. (this substep S10222 is to be continued with the understanding of the flowchart of FIG. 3A)
Within the real-time offset search range, there are multiple offsets (each including a lateral offset and a longitudinal offset), and for each offset the overlap of the corresponding fingerprint sub-image with the quality optimal sub-image may be different. The similarity of the overlapping part of the fingerprint sub-image and the quality optimal sub-image corresponding to each offset can be calculated, and the offset corresponding to the maximum similarity is taken as the real-time offset of the current moment.
Alternatively, the real-time offset of the current moment between the fingerprint sub-image and the quality optimal sub-image may be calculated by:
{cx, cy} = arg_max{ S(pic_A, pic_B, sx + i, sy + j) }, i ∈ [-wx, wx], j ∈ [-wy, wy]
In the above formula, cx represents the lateral offset and cy the longitudinal offset among the real-time offsets at the current time; pic_A represents the fingerprint sub-image and pic_B the quality optimal sub-image; S represents the similarity of the overlapping parts of pic_A and pic_B under a given offset; sx represents the lateral offset and sy the longitudinal offset in the current reference offset; -wx and wx represent the negative and positive preset offset margins in the image lateral direction, respectively, and -wy and wy the negative and positive preset offset margins in the image longitudinal direction, respectively. As can be seen from the above, "sx + i, sy + j, i ∈ [-wx, wx], j ∈ [-wy, wy]" also represents the real-time offset search range (as can be understood in connection with fig. 2C). The above equation can be understood as a traversal search within the real-time offset search range: the offset corresponding to the maximum similarity is selected as the real-time offset at the current time for final output.
In this application, the above-mentioned similarity may be used to describe the correlation between the overlapping portions of the fingerprint sub-image and the quality optimal sub-image, and may be implemented with any suitable index. Alternatively, the similarity may be calculated by at least one of the indexes of correlation NCC (Normalized Cross Correlation), structural similarity SSIM (Structural Similarity), mean square error MSE (Mean Square Error), and the like. The larger the correlation NCC and the structural similarity SSIM, the larger the similarity; the smaller the mean square error MSE, the larger the similarity.
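The traversal search of the formula above can be sketched as follows, using NCC as the similarity index S. This is illustrative code, not the patent's implementation; the offset convention assumed here is that pic_A[y, x] corresponds to pic_B[y + dy, x + dx]:

```python
import numpy as np

def ncc(a, b):
    # Normalized cross correlation of two equally shaped patches.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def overlap(pic_a, pic_b, dx, dy):
    # Overlapping patches of pic_a and pic_b under offset (dx, dy),
    # assuming pic_a[y, x] corresponds to pic_b[y + dy, x + dx].
    h, w = pic_b.shape
    ax, ay = max(0, -dx), max(0, -dy)
    bx, by = max(0, dx), max(0, dy)
    ow, oh = w - abs(dx), h - abs(dy)
    return pic_a[ay:ay + oh, ax:ax + ow], pic_b[by:by + oh, bx:bx + ow]

def real_time_offset(pic_a, pic_b, sx, sy, wx, wy):
    # Traverse the search range (sx + i, sy + j) and keep the offset
    # whose overlapping region has the maximum similarity.
    best, best_s = (sx, sy), -np.inf
    for i in range(-wx, wx + 1):
        for j in range(-wy, wy + 1):
            dx, dy = sx + i, sy + j
            a, b = overlap(pic_a, pic_b, dx, dy)
            s = ncc(a, b)
            if s > best_s:
                best_s, best = s, (dx, dy)
    return best
```

A production implementation would typically use an optimized matcher (e.g. FFT-based correlation) rather than this brute-force double loop, but the argmax-over-neighborhood logic is the same.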
Based on this, the present application can accurately and reliably determine the real-time offset between each fingerprint sub-image and the quality optimal sub-image through the optional manner of sub-steps S10221 to S10222, so that the subsequent steps can realize alignment and image fusion of the plurality of fingerprint sub-images with reliable real-time offsets. This adapts more effectively to the deviations brought by different fingerprint states and different pressing states of the finger on the three-dimensional surface, and is more conducive to improving the image quality of the fused image obtained after fusing the plurality of fingerprint sub-images, so as to facilitate subsequent data processing using the fused image.
In some optional embodiments, the fingerprint image processing method in the present application further includes: and updating the reference offset data according to the real-time offset of the current moment between each fingerprint sub-image and the sub-image with the optimal quality.
The offset between the sub-images detected by each pixel unit varies slightly with the influence of temperature on the optical fingerprint sensor, and, in extreme cases, with displacement of the optical fingerprint sensor module caused by an abnormal drop (e.g., a crash) of the electronic device on which the optical fingerprint sensor is mounted. By updating the reference offset data, the data can better adapt to changes of the optical fingerprint sensor in different environments and different states, so that the real-time offsets determined later are more accurate and reliable, facilitating further data processing.
In some alternative embodiments, referring to the flowchart shown in fig. 3B, the manner of updating the reference offset data includes the following sub-steps S202, S204 and S206, in particular:
s202: and determining reference real-time offset between the fingerprint sub-images collected by the second pixel units and the fingerprint sub-images collected by the first pixel units in the fingerprint sub-images according to the real-time offset of the current moment between each fingerprint sub-image and the quality optimal sub-image.
As described above, the reference offset data may include reference offsets between the plurality of reference fingerprint sub-images and the reference quality optimal sub-image, and in this application, the reference offset data is updated, that is, each reference offset is updated, so that real-time offsets (that is, the reference real-time offsets) converted between the corresponding reference fingerprint sub-images and the reference quality optimal sub-image need to be calculated.
Because the sub-image with optimal reference quality is acquired by the first pixel unit, and the sub-image with reference fingerprint is acquired by the second pixel unit, each reference real-time offset can be converted through each real-time offset.
For example, take again the case where pixel unit 1 of the 9 pixel units is the first pixel unit that acquired the reference quality optimal sub-image, pixel units 2 to 9 are the second pixel units that acquired the reference fingerprint sub-images, and the 9 pixel units respectively acquire 9 images at the current time. If the quality optimal sub-image determined in step S1021 is the fingerprint sub-image acquired by pixel unit 2, and the real-time offset between the fingerprint sub-image acquired by pixel unit 1 (i.e., the first pixel unit) and the quality optimal sub-image acquired by pixel unit 2 is (x2, y2), then the reference real-time offset between the fingerprint sub-image acquired by pixel unit 2 and the fingerprint sub-image acquired by pixel unit 1 (i.e., the first pixel unit) is (-x2, -y2); similarly, the reference real-time offsets between the fingerprint sub-images collected by the other second pixel units (pixel units 3 to 9) and the fingerprint sub-image collected by the first pixel unit (pixel unit 1) may be obtained by conversion from the respective real-time offsets. It should be understood that this is merely an example for ease of understanding and is not limiting of the present application.
S204: for each reference real-time offset, determining a reference offset between the reference fingerprint sub-image acquired by the corresponding second pixel unit and the reference quality optimal sub-image, and carrying out weighted summation on the reference real-time offset and the reference offset according to the updating weight. (this substep S204 is to be continued with the understanding of the flowchart of FIG. 3B)
For example, again with pixel unit 1 as the first pixel unit and pixel units 2 to 9 as the second pixel units, consider the reference real-time offset between the fingerprint sub-image collected by any one of pixel units 2 to 9 and the fingerprint sub-image collected by pixel unit 1. Taking pixel unit 2 as an example, the reference offset between the reference fingerprint sub-image collected by pixel unit 2 and the reference quality optimal sub-image is determined, and the reference real-time offset between pixel unit 2 and pixel unit 1 and this reference offset are weighted and summed according to the update weight, so as to obtain the weighted summation result corresponding to pixel unit 2. Similarly, for pixel units 3 to 9, the weighted summation can be performed with reference to the calculation for pixel unit 2, giving the corresponding weighted summation results. That is, 8 weighted summation results can be obtained for the 8 reference real-time offsets corresponding to pixel units 2 to 9. It should be understood that this is by way of example only and is not intended as a limitation of the present application.
Optionally, in sub-step S204, the reference real-time offset and the reference offset may be weighted and summed according to the following two formulas to obtain the corresponding weighted summation result:
sx(t)=(1-w)*sx(t-1)+w*Cx
sy(t)=(1-w)*sy(t-1)+w*Cy
wherein w and 1 - w are used to characterize the update weights, with w ∈ (0, 1); Cx represents the lateral offset in the reference real-time offset, and sx(t) and sx(t-1) represent the lateral offset in the reference offset at the later and earlier of the two times, respectively. Corresponding to the above embodiment, sx(t) may be understood as the lateral offset in the updated reference offset and sx(t-1) as the lateral offset in the pre-update reference offset.
Where Cy represents the longitudinal offset in the reference real-time offset, and sy (t) and sy (t-1) are used to represent the longitudinal offset in the reference offset at two times before and after, respectively. Corresponding to the above embodiment, sy (t) can be understood as the longitudinal offset in the reference offset after updating, and sy (t-1) can be understood as the longitudinal offset in the reference offset before updating.
And calculating to obtain the transverse offset and the longitudinal offset in the updated reference offset through the two formulas, namely obtaining the updated reference offset between the reference fingerprint sub-image and the reference quality optimal sub-image, namely obtaining a weighted summation result.
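The two update formulas can be sketched together as follows (a hypothetical helper; w is the update weight from the text):

```python
def update_reference_offset(prev_ref, ref_realtime, w):
    """Weighted summation of the stored reference offset s(t-1) and the
    newly converted reference real-time offset C, per
    s(t) = (1 - w) * s(t-1) + w * C, applied to both axes."""
    (sx, sy), (cx, cy) = prev_ref, ref_realtime
    return ((1 - w) * sx + w * cx, (1 - w) * sy + w * cy)
```

Each such weighted summation result directly becomes the updated reference offset of one second pixel unit; this is the exponential-moving-average form common in tracking noisy calibration data.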
It should be appreciated that the weighted summation by different update weights may result in different updated reference offsets. Alternatively, the update weights w and 1-w may be preset values (which may be understood as initialized update weights) before the fingerprint image processing method of the present solution is first executed.
S206: and updating the reference offset data according to the weighted summation result corresponding to each reference real-time offset. (this substep S206 is to be continued with the understanding of the flowchart of FIG. 3B)
Each weighted summation result is an updated reference offset between the reference fingerprint sub-image and the reference quality optimal sub-image, and each updated reference offset is used as updated reference offset data according to each updated reference offset, so that the updating of the reference offset data is realized.
Based on this, through the alternative implementation manners including the above substeps S202 to S206 in the present application, the reference offset data may be effectively and reliably updated, and by using the thus updated reference offset data, the change of the optical fingerprint sensor in different environments and different states may be better adapted, so that the real-time offset determined later may be more accurate and reliable, so as to facilitate further data processing.
In some optional embodiments, the fingerprint image processing method in the present application further includes: and adjusting the update weight according to at least one of the image quality of the fingerprint sub-image corresponding to the reference real-time offset and the similarity of the overlapping part of the fingerprint sub-image corresponding to the reference real-time offset and the quality optimal sub-image.
According to the method and the device, the updating weight is adjusted according to at least one of the image quality of the fingerprint sub-image corresponding to the reference real-time offset and the similarity of the overlapping part of the fingerprint sub-image corresponding to the reference real-time offset and the quality optimal sub-image, so that the reference offset data can be updated better, the reference offset data can be better adapted to the change of the optical fingerprint sensor in different environments and different states, the real-time offset determined later is more accurate and reliable, and further data processing is facilitated.
The image quality of a fingerprint sub-image can be determined by calculating one or more of the indexes of Harris response HR, normalized gradient SR, peak signal-to-noise ratio PSNR, normalized signal quantity NSR, and the like, and determining the image quality according to the calculation result. The similarity may be calculated by at least one of the indexes of correlation NCC (Normalized Cross Correlation), structural similarity SSIM (Structural Similarity), and mean square error MSE (Mean Square Error). These contents are described similarly in the foregoing and are not repeated here.
Optionally, in some optional embodiments, the better the image quality of the fingerprint sub-image corresponding to the reference real-time offset, the higher the adjusted update weight. Thus, the reliability of the updated weight obtained by calculation after adjustment is higher.
Optionally, in other optional embodiments, the higher the similarity of the overlapping portion of the fingerprint sub-image corresponding to the reference real-time offset and the quality optimal sub-image, the higher the adjusted update weight. In this way, the reliability of the updated weight calculated after adjustment can be higher.
Optionally, in still other optional embodiments, when the similarity between the overlapping portion of the fingerprint sub-image corresponding to the reference real-time offset and the quality optimal sub-image is greater than a certain similarity threshold, the update weight may be adjusted according to the image quality of the fingerprint sub-image corresponding to the reference real-time offset. This may be specifically that the better the image quality of the fingerprint sub-image corresponding to the reference real-time offset, the higher the adjusted update weight. Thus, the reliability of the updated weight calculated after adjustment can be improved.
Optionally, in the present application, when the update weight is adjusted, the update weight may be adjusted from a preset update weight range. The update weight range may be preset as desired. For example, the update weight w may range from 0.2 to 0.7, from 0.1 to 0.6, and so on, and it is understood that the corresponding 1-w may vary with w.
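One possible (purely illustrative) way to realize this adjustment, mapping a better-quality fingerprint sub-image to a higher update weight within the preset range; the linear mapping and the normalized quality score are assumptions, not mandated by the text:

```python
def adjusted_update_weight(quality_score, w_min=0.2, w_max=0.7):
    """Clamp a normalized image-quality score to [0, 1] and map it
    linearly into the preset update-weight range [w_min, w_max]:
    the better the image quality, the higher the adjusted update
    weight w (and correspondingly the lower 1 - w)."""
    q = min(max(quality_score, 0.0), 1.0)
    return w_min + q * (w_max - w_min)
```

The same scheme could take the overlap similarity (or a combination of quality and similarity) as its input, matching the other optional embodiments above.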
S1023: and aligning the plurality of fingerprint sub-images according to the real-time offset of the current moment between each fingerprint sub-image and the sub-image with the optimal quality, and fusing the aligned plurality of fingerprint sub-images to obtain a fused image. (this substep S1023 is to be continued with the understanding of the flowchart of FIG. 2A)
Alternatively, each fingerprint sub-image except the quality optimal sub-image may be aligned with the quality optimal sub-image over their overlapping portion using the real-time offset (including the lateral offset and the longitudinal offset) of the corresponding current time; after each fingerprint sub-image is aligned with the quality optimal sub-image, alignment of the plurality of fingerprint sub-images is achieved. The aligned fingerprint sub-images can then be fused to obtain a fused image, which better aggregates the information of the collected fingerprint sub-images, has a larger field of view and lower spatial noise, and is thus convenient to process into an output image for real-time fingerprint identification.
In the application, because the offset exists between the fingerprint sub-images, the area of the fusion image is larger than that of a single fingerprint sub-image, namely, the field of view of the fusion image is larger than that of the single fingerprint sub-image, so that the fusion image is used for processing to obtain the output image for fingerprint identification, the effective information of a plurality of fingerprint sub-images can be better aggregated, the output image can utilize the information of the large field of view of the fusion image, and the identification field of view of fingerprint identification is increased.
In some alternative embodiments, referring to the flowchart shown in fig. 3C, "fusing the aligned fingerprint sub-images to obtain a fused image" in step S1023 includes sub-steps S10231 and S10232, specifically:
s10231: aiming at any overlapping area of the plurality of aligned fingerprint sub-images, according to the image quality of the local part of each fingerprint sub-image in the overlapping area and the similarity of the overlapping part of each fingerprint sub-image in the overlapping area and the quality optimal sub-image, the local part of each fingerprint sub-image in the overlapping area is fused, and a local fusion result is obtained.
In the present application, a plurality of aligned fingerprint sub-images may have a plurality of overlapping areas, and each overlapping area may include contents of a part of the plurality of fingerprint sub-images.
In substep S10231 of the present application, the fusing of the local parts of the respective fingerprint sub-images in the overlapping area is performed by the image quality of the local parts of the respective fingerprint sub-images in the overlapping area and the similarity of the overlapping parts of the respective fingerprint sub-images in the overlapping area and the quality optimal sub-image, because: the image quality of a portion of the fingerprint sub-image can describe whether the portion of the fingerprint sub-image is clear or not, the more clear the portion of the fingerprint sub-image is, the better the image quality of the portion is (it is understood that the higher the local image quality score is hereinafter); the similarity of the overlapping part of the fingerprint sub-image and the quality optimal sub-image can describe the consistency of the lines between the fingerprint sub-image and the quality optimal sub-image, and after the fingerprint sub-images are aligned, the more consistent the lines at the same position are, the higher the similarity is. Therefore, in the present application, the above dimensions are comprehensively considered when fusing the portions of the respective fingerprint sub-images in the overlapping region. 
Firstly, considering the local image quality, enabling an output local fusion result to contain local clear areas of all fingerprint sub-images as much as possible, and avoiding that the local fusion result of the output local fusion result is fuzzy due to the fact that more local fuzzy areas of all fingerprint sub-images enter fusion, so as to avoid the fuzzy of fusion images obtained subsequently; secondly, the similarity is considered to inhibit areas with inconsistent grain directions of grains and optimal quality sub-images in all fingerprint sub-images, so that image grain distortion and blurring of an output local fusion result are avoided, and further image grain distortion and blurring of a fusion image obtained later are avoided.
In some alternative embodiments, referring to the flowchart shown in fig. 3D, the "fusing the part of each fingerprint sub-image in the overlapping area according to the image quality of the part of each fingerprint sub-image in the overlapping area and the similarity of the overlapping part of each fingerprint sub-image in the overlapping area and the quality optimal sub-image" in step S10231 includes substeps S10231A, S10231B and S10231C, specifically:
S10231A: a local image quality score is determined for each fingerprint sub-image within the region of overlap.
Optionally, for any overlapping region, image quality scoring may be performed on a local portion of each fingerprint sub-image through one or more indexes of harris response HR, normalized gradient SR, peak signal to noise ratio PSNR, normalized signal quantity NSR, and the like, through an algorithm, so as to obtain a local image quality score of each fingerprint sub-image in the overlapping region. These relevant contents are similarly described in the foregoing, and are not repeated here.
The individual fingerprint sub-images within the overlapping region may then be ranked according to the local image quality score. Here all fingerprint sub-images within the overlapping region (including the quality optimal sub-image, if present) need to be computed. For example, it may be assumed that the local image quality scores of the individual fingerprint sub-images, sorted from high to low, are: Q1, Q2, Q3, …, Qn. Correspondingly, the fingerprint sub-images corresponding to this ordering may be assumed to be: P1, P2, P3, …, Pn.
S10231B: and determining the fusion weight corresponding to each fingerprint sub-image in the overlapping area according to the obtained quality scores of each local image and the similarity of the overlapping parts of each fingerprint sub-image and the quality optimal sub-image in the overlapping area.
For example, the similarities of the overlapping portions of the respective fingerprint sub-images and the quality optimal sub-image in the overlapping region may be assumed to be S1, S2, S3, …, Sn (as described above, the similarity may be determined by at least one of the indexes of correlation NCC, structural similarity SSIM, mean square error MSE, etc., where the larger the NCC and SSIM, the larger the similarity, and the smaller the MSE, the larger the similarity). It should be noted that here all fingerprint sub-images within the overlapping region (including the quality optimal sub-image, if any) need to be calculated. Alternatively, the similarity of the quality optimal sub-image with respect to itself may be defined as 100.
Thereafter, based on the local image quality scores Q1, Q2, Q3, …, Qn and the respective similarities S1, S2, S3, …, Sn, the fusion weights corresponding to the respective fingerprint sub-images P1, P2, P3, …, Pn in the overlapping region can be determined. For example, corresponding to the example case of the above 9 fingerprint sub-images, if parts of all 9 fingerprint sub-images exist within a certain overlapping area, then n = 9; if only t (t < 9) of the 9 fingerprint sub-images are present within a certain overlapping region, then n = t.
S10231C: carrying out weighted summation on the local pixel values of the fingerprint sub-images according to the obtained fusion weights, so as to fuse the parts of the fingerprint sub-images in the overlapping region.
The local fusion result is a local image. The pixel values of the local parts of the fingerprint sub-images may be weighted and summed with the obtained fusion weights to obtain the pixel values of the pixel points of the local fusion result, thereby fusing the parts of the fingerprint sub-images in the overlapping region and obtaining the local fusion result.
Based on this, through the alternative implementation including the above sub-steps S10231A to S10231C, the parts of the fingerprint sub-images in the overlapping region can be fused reliably and effectively to obtain a reliable local fusion result, which facilitates the subsequent generation of the fused image.
In some alternative embodiments, the higher the local image quality score corresponding to a fingerprint sub-image and the greater its similarity, the higher the fusion weight determined for that fingerprint sub-image.
By determining a higher fusion weight for fingerprint sub-images with a higher local image quality score and a larger similarity, the image quality of the fused local fusion result can be effectively and better ensured, the noise introduced by fingerprint sub-images with poorer local image quality during local fusion can be weakened, and the situation that the fused local fusion result is of lower image quality than the pre-fusion quality-optimal sub-image can be avoided.
It can be understood that, in this optional embodiment, when fusing the parts of the fingerprint sub-images in the overlapping region: first, the output local fusion result includes the locally clear areas of the fingerprint sub-images with as high a weight as possible, which prevents the locally blurred areas of the fingerprint sub-images from entering the fusion with a high weight and blurring the output local fusion result, and in turn prevents the subsequently obtained fused image from being blurred; second, areas in the fingerprint sub-images whose ridge orientation is inconsistent with that of the quality-optimal sub-image can be suppressed, which avoids distortion and blurring of the image texture of the output local fusion result, and in turn avoids distortion and blurring of the image texture of the subsequently obtained fused image.
For example, the technical effects described above can be understood with reference to fig. 3E, which shows a schematic diagram of fusing the parts of two images. As shown in fig. 3E, there are 9 overlapping regions (a1 and B1 overlap, a2 and B2 overlap, …). The local image quality score of image A is higher in one part of the overlapping regions (e.g., a1 is better than B1, a4 better than B4, a5 better than B5, a7 better than B7), while the local image quality score of image B is higher in the other part (e.g., B2 is better than a2, B3 better than a3, B6 better than a6, B8 better than a8, B9 better than a9). Image C shows the result of directly fusing image A and image B in each overlapping region with a 50% fusion weight each; the local image quality scores of the resulting image C are lower than those of the image with the better local image quality in the corresponding overlapping region. Image D shows the result of fusing the parts of image A and image B such that the higher local image quality score receives the higher fusion weight in each overlapping region. (It should be noted that, for easier visual understanding, image D shows the special case in which the higher local image quality score receives a fusion weight of 100%, so that the part of image A or image B with the higher local image quality score in each overlapping region enters image D entirely.) It can be seen that image D is of better quality than image C. That is, fusing the images with higher fusion weights for higher local image quality scores in the overlapping regions can effectively and better ensure the image quality of the fused local fusion result, reflecting the technical effects described above.
It should be understood that the description with respect to fig. 3E is not intended as any limitation in the present application.
Alternatively, the above sub-step S10231C may be implemented in the present application by the following formula:

BP = Σ_{i=1}^{n} [ (Q_i · S_i) / (Σ_{j=1}^{n} Q_j · S_j) ] · P_i

wherein BP is used for representing the local fusion result; P_i is used for representing the respective fingerprint sub-images (P_1, P_2, P_3, …, P_n), whose pixel values are substituted into the formula pixel point by pixel point; Q_i is used for representing the local image quality scores (Q_1, Q_2, Q_3, …, Q_n), Q_1 being the maximum of the local image quality scores; S_i is used for representing the similarity of the overlapping portion of fingerprint sub-image P_i to the quality-optimal sub-image (S_1, S_2, S_3, …, S_n), S_1 being the similarity corresponding to the fingerprint sub-image P_1 whose local image quality score is Q_1; and n is the total number of fingerprint sub-images in the overlapping region.

As can be seen from the above formula, for any fingerprint sub-image P_i within the overlapping region, the fusion weight is:

w_i = (Q_i · S_i) / (Σ_{j=1}^{n} Q_j · S_j)

and it can be seen that the higher the local image quality score Q_i and the larger the similarity S_i of the fingerprint sub-image P_i, the higher its corresponding fusion weight, which effectively ensures the image quality of the local fusion result obtained by fusion.

When calculated by the above formula, the pixel values of the corresponding pixel points of each fingerprint sub-image P_i in the overlapping region are substituted to perform the weighted summation, yielding the pixel values of the pixel points of the local fusion result and thereby fusing the parts of the fingerprint sub-images in the overlapping region.
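The weighted summation of sub-step S10231C can be sketched as follows, under the assumption (consistent with the surrounding description) that each fingerprint sub-image's fusion weight is proportional to the product Q_i · S_i of its local image quality score and its similarity, normalized so that the weights sum to 1. The function name and array inputs are illustrative.

```python
import numpy as np

def fuse_overlap(patches, q_scores, s_scores):
    """Weighted summation of overlapping patches (sub-step S10231C).

    Assumes the fusion weight of patch i is proportional to Q_i * S_i,
    normalized so that all weights sum to 1.
    """
    patches = [p.astype(np.float64) for p in patches]
    w = np.array([q * s for q, s in zip(q_scores, s_scores)], dtype=np.float64)
    w /= w.sum()  # normalize fusion weights
    out = np.zeros_like(patches[0])
    for weight, patch in zip(w, patches):
        out += weight * patch  # per-pixel weighted summation
    return out
```

A patch with a higher Q_i · S_i product contributes proportionally more to every pixel of the local fusion result.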
S10232: splicing the local fusion results corresponding to the respective overlapping regions of the aligned fingerprint sub-images to obtain the fused image. (This sub-step S10232 should be understood in conjunction with the flowchart of FIG. 3C.)
After each local fusion result corresponding to each overlapping region is obtained, the local fusion results can be spliced into a large image, i.e., the fused image. Then, the output image for fingerprint identification can be obtained from the fused image after further processing in the following steps S104 to S108.
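The splicing of sub-step S10232 can be sketched as follows, assuming each local fusion result carries a known (top, left) position on the fused-image canvas obtained from the earlier alignment step; the function name and data layout are illustrative.

```python
import numpy as np

def stitch(canvas_shape, results):
    """Splice each local fusion result onto a large canvas at its known
    (top, left) position (sub-step S10232). Positions are assumed to come
    from the alignment of the fingerprint sub-images."""
    canvas = np.zeros(canvas_shape)
    for top, left, patch in results:
        h, w = patch.shape
        canvas[top:top + h, left:left + w] = patch  # place the local result
    return canvas
```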
Based on this, through the alternative implementation including the above sub-steps S10231 to S10232, the local image quality of each fingerprint sub-image in the overlapping region and its similarity to the quality-optimal sub-image are both taken into account when locally fusing the fingerprint sub-images in the overlapping region, so that the fused image spliced from the local fusion results is more reliable and better aggregates the effective information of the plurality of fingerprint sub-images. The output image for fingerprint identification is then obtained by processing the fused image, so that the output image can better utilize the information of the large field of view of the fused image; the identification field of view for fingerprint identification is therefore larger, the spatial noise is lower, and the signal-to-noise ratio is higher, which is beneficial to improving the identification success rate.
In other optional embodiments, when "fusing the parts of the fingerprint sub-images in the overlapping area with respect to any overlapping area of the aligned fingerprint sub-images to obtain the local fusion result", the method may be implemented only according to the local image quality of the fingerprint sub-images in the overlapping area, or only according to the similarity of the overlapping parts of the fingerprint sub-images in the overlapping area and the quality optimal sub-image. That is, the fusion weight may only consider local image quality or similarity, and may satisfy the requirement.
Through the alternative implementations of the above sub-steps S1021 to S1023, the alignment and fusion of the plurality of fingerprint sub-images can be realized by accurately determining and applying the real-time offset of each fingerprint sub-image, so that the method can more effectively adapt to the deviations caused by different fingerprint states and pressing states of the finger on the three-dimensional surface, and the quality of the fused image obtained by fusing the plurality of fingerprint sub-images can be improved, which facilitates the subsequent processing of the fused image into an output image. In addition, since the output image for fingerprint identification is obtained by processing the fused image formed by fusing the plurality of fingerprint sub-images after alignment according to the real-time offsets, the effective information of the plurality of fingerprint sub-images is better aggregated, and the output image can better utilize the information of the large field of view of the fused image; the identification field of view for fingerprint identification is therefore larger, the spatial noise is lower, and the signal-to-noise ratio is higher, which is beneficial to improving the identification success rate.
S104: determining a first image area corresponding to the position of the first target sub-image in the plurality of fingerprint sub-images from the fused image, and expanding the edge of the first image area according to the fused image to obtain a first extension image. (This step S104 should be understood in conjunction with the flowchart of FIG. 1.)
In the application, after obtaining the fused image with a larger image size, a first image area corresponding to the position of the first target sub-image can be determined from the fused image. The first target sub-image may be any one of a plurality of fingerprint sub-images. The first image area and the first target sub-image are the same size.
Thereafter, the edge of the first image area is expanded according to the fused image to obtain a first extension image. Owing to the expansion, the first extension image is larger in size than the first image area, and it provides a larger identification field of view relative to the individual fingerprint sub-images and the first image area.
Alternatively, the first target sub-image may be a fingerprint sub-image of the plurality of fingerprint sub-images having the best image quality. That is, the first target sub-image may be a quality optimal sub-image. Thus, the step S104 may be to determine, from the fused image, a first image area corresponding to a fingerprint sub-image having the best image quality among the plurality of fingerprint sub-images.
Based on the method, the image quality of the first expansion image obtained in the follow-up process can be effectively improved, and the image quality of the second expansion image obtained in the follow-up process of expanding the edge of the input image can be effectively improved, so that the image quality of the output image for fingerprint identification is ensured, and the identification success rate of fingerprint image identification can be effectively improved.
In other alternative embodiments, the first target sub-image may also be the sub-image with sub-optimal image quality among the plurality of fingerprint sub-images, or another fingerprint sub-image, as long as the requirements can be satisfied.
As already described above, the quality-optimal sub-image may be determined in any suitable manner in the present application, without limitation. For example, any suitable algorithm may be used to calculate one or more of the following indexes for each fingerprint sub-image: Harris Response (HR), normalized gradient (SR, Sobel Response), Peak Signal-to-Noise Ratio (PSNR), and Normalized Signal quantity (NSR, Normalized Stoichiometric Ratio, which may characterize the ratio between the peak-to-peak value and the local mean). The image quality of the plurality of fingerprint sub-images can then be determined according to the calculation results, so as to determine the quality-optimal sub-image with the best image quality. Alternatively, an image quality score may be computed for each fingerprint sub-image from one or more of the above indexes, and the fingerprint sub-image with the highest image quality score determined as the quality-optimal sub-image, which may then be determined as the first target sub-image. The fingerprint sub-image with the second-highest image quality score is the fingerprint sub-image with the next-best image quality (e.g., also referred to hereinafter as the sub-optimal quality sub-image). Furthermore, as described above, the plurality of fingerprint sub-images may be ordered according to image quality; the relevant content can be understood with reference to the foregoing and will not be repeated here.
The specific manner of "expanding the edge of the first image area according to the fused image to obtain the first expanded image" in step S104 is not limited in this application. For example, in some alternative embodiments, referring to the flow chart shown in fig. 4A, it may comprise sub-steps S1041 and S1042, in particular:
s1041: a second image region including the first image region is determined from the fused image, wherein the second image region exceeds an edge of at least one side of the first image region.
In the application, after determining the first image area from the fused image, the edge of at least one side of the first image area may be expanded to the outside, so that the first image area is expanded to a larger second image area in the fused image. In this alternative embodiment, the second image region may also be understood as an ROI (Region Of Interest ) in the fused image.
It will be appreciated that the fingerprint sub-image and the first image area each comprise four side edges. In the application, when determining the second image area including the first image area from the fused image, the second image area may be extended to one side edge or multiple side edges of the first image area as required, so long as the requirement can be satisfied.
The size of the expansion may be selected so as to meet the identification performance requirements. It should be understood that a larger size is not necessarily better: the expansion should satisfy both the identification success rate and the identification speed requirements. For example, for some blurred fused images that affect the identification success rate, too large an expansion size may lead to a texture mismatch, which in turn reduces the identification success rate. Likewise, since large images are generally identified more slowly, too large an expansion size may also reduce the identification speed. An appropriate expansion size can therefore be selected according to the actual requirements.
For example, in some alternative embodiments, if none of the four side edges of the first image region overlap the edges of the fused image, then the four side edges of the second image region respectively exceed the corresponding four side edges of the first image region.
For this case, as shown in a diagram in fig. 5, the first image area corresponding to the first target sub-image may be located in the middle of the fused image, and the four side edges of the first image area do not overlap with the edges of the fused image, so that the four side edges of the first image area may be respectively expanded in four directions (the directions indicated by arrows are the expansion directions) in the fused image, thereby determining the second image area.
For example, in alternative embodiments, if the edge of at least one side of the first image region coincides with the edge of the fused image, the second image region exceeds the edges of the first image region that do not coincide with the edge of the fused image.
For this case, referring to b and c in fig. 5, the first image region corresponding to the first target sub-image may be located at an angular position of the fused image, and two side edges thereof do not overlap with the edge of the fused image, and then the other two side edges of the first image region may be respectively extended in the fused image in two other directions (the direction of extension is shown by the arrow, as shown in the figure, b may be extended downward and rightward, and c may be extended upward and leftward), thereby determining the second image region.
Of course, when the first image region corresponding to the first target sub-image is located at one edge of the fused image, that is, when only one side edge of the first image region overlaps with one side edge of the fused image (for example, overlaps with an upper side edge of the fused image), the other three side edges of the first image region may be respectively expanded (for example, expanded downward, left, and right) in the fused image, so as to determine the second image region.
Based on the above, the above-mentioned alternative scheme can adapt to the position states of different first image areas in the fused image, effectively determine the second image area including the first image area, and effectively realize the expansion of the edges of the first image area according to the fused image.
It will be appreciated that, in expanding the edges of the first image region, the four side edges of the first image region may be expanded up to the four side edges of the fused image, i.e., the whole of the fused image may be determined as the second image region. It should also be appreciated that, when the side edges of the first image region are expanded, the expansion size of each side may be the same or different, as long as the requirements are satisfied. For example, in the a-diagram of fig. 5, the expansion of the first image region toward the left and right edges may be slightly larger than toward the upper and lower edges, although this is merely an example and not a limitation of the present application.
S1042: and cutting out the second image area from the fused image, and taking the cutting result as a first extension image.
After determining the second image area including the first image area from the fused image, the second image area may be truncated from the fused image, and the truncated result is the first extension image.
In connection with fig. 5 a-c, there are shown 3 examples of the clipping of the second image area from the fused image resulting in the first extension image. It should be understood that this is not a limitation in the present application.
Based on this, by including the optional implementation manners of the above sub-steps S1041 to S1042 in this application, the edge of the first image area may be effectively expanded, so as to obtain the first expanded image, so as to facilitate the subsequent determination of the output image for fingerprint identification.
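Sub-steps S1041 to S1042 can be sketched as a clamped crop, assuming the first image area's position (top, left) and size within the fused image are known from the fusion step; the margin parameter and function name are illustrative.

```python
import numpy as np

def expand_and_crop(fused, top, left, h, w, margin):
    """Expand the first image region (top, left, h, w) by `margin` pixels
    on every side, clamping to the fused-image bounds (sub-step S1041),
    then crop the second image region out (sub-step S1042)."""
    H, W = fused.shape
    t = max(0, top - margin)          # sides that touch the fused-image edge
    l = max(0, left - margin)         # are simply not expanded further
    b = min(H, top + h + margin)
    r = min(W, left + w + margin)
    return fused[t:b, l:r]
```

When the first image region sits in the middle of the fused image, all four sides are expanded; when it touches a corner or edge (as in diagrams b and c of fig. 5), the clamping leaves the coinciding sides unexpanded.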
S106: determining an input image according to at least one fingerprint sub-image, and expanding the edge of the input image according to the fused image to obtain a second extension image, wherein the at least one fingerprint sub-image comprises the first target sub-image. (This step S106 should be understood in conjunction with the flowchart of FIG. 1.)
In this application, the input image is determined from at least one fingerprint sub-image including the first target sub-image, which is the same size as the fingerprint sub-image. In step S106, the size of the second extension image obtained by extending the edge of the input image from the fusion image is the same as the size of the first extension image obtained in step S104.
The manner of determining the input image is not limited in this application. For example, in some alternative embodiments, referring to the flowchart shown in fig. 4B, "determining an input image from at least one fingerprint sub-image" in step S106 includes the following sub-steps S1061 and S1062, in particular:
S1061: and carrying out local fusion on the overlapped part between the first target sub-image and the second target sub-image in the plurality of fingerprint sub-images to obtain a target local fusion image.
S1062: and splicing the part which is not fused locally in the first target sub-image and the target local fusion image into an input image.
Based on the above, in the application, the target local fusion image is obtained by locally fusing the overlapped part of the first target sub-image and the second target sub-image, and the input image is spliced according to the unfused part of the first target sub-image and the target local fusion image, so that the input image can be effectively obtained, and the edge expansion can be conveniently carried out according to the input image to obtain the second expansion image. In addition, the input image is generated by adopting two fingerprint sub-images, so that the calculated amount can be reduced on the basis of ensuring the effect of the input image.
Referring to fig. 6, a schematic diagram of partially fusing overlapping portions between a first target sub-image and a second target sub-image to generate a target partially fused image and an input image is shown, where it can be seen that the input image includes a target partially fused image (i.e., a result of the partial fusion) and a portion of the first target sub-image that is not partially fused. It should be understood that this is just one example for ease of understanding.
In this application, the second target sub-image may be any one of the plurality of fingerprint sub-images except the first target sub-image, and may be selected as required.
For example, in some alternative embodiments, if the first target sub-image is a fingerprint sub-image of the plurality of fingerprint sub-images that is of the best image quality (i.e., a best quality sub-image), the second target sub-image is a fingerprint sub-image of the plurality of fingerprint sub-images that is of the suboptimal image quality (i.e., a suboptimal quality sub-image).
Based on the above, in the application, the overlapped part of the sub-image with the optimal quality and the sub-image with the suboptimal quality is locally fused, so that the image quality of the input image can be better ensured, and the image quality of a second expansion image obtained by expanding the input image is improved.
In some alternative embodiments, sub-step S1061 may be to perform local fusion after aligning the offset of the overlapping portion between the first target sub-image and the second target sub-image to obtain the target local fusion image. The method is favorable for adapting to deviation caused by different fingerprint states and pressing states of the finger on the three-dimensional surface, and is favorable for improving the effect of locally fusing the overlapped part between the first target sub-image and the second target sub-image, so that the quality of the obtained input image is ensured, and the effect of the second expanded image obtained subsequently is better.
In some alternative embodiments, referring to the flowchart shown in fig. 4C, sub-step S1061 may include sub-steps S1061A and S1061B, in particular:
S1061A: and determining target fusion weights corresponding to the first target sub-image and the second target sub-image respectively according to the first local image quality of the first target sub-image in the overlapped part and the second local image quality of the second target sub-image in the overlapped part, and if the local image quality is higher, determining the target fusion weights to be higher.
In the application, the higher the local image quality of the first target sub-image and the second target sub-image is, the higher the target fusion weight is, so that the target local fusion image obtained by the subsequent local fusion can be ensured, and the characteristics of the image with higher local image quality can be more reserved, thereby improving the local fusion effect.
Alternatively, local image quality scores may be calculated for the first target sub-image and the second target sub-image in the overlapping portion, representing the first local image quality and the second local image quality respectively, and the target fusion weights may then be determined from these scores. The higher the local image quality score, the higher the determined target fusion weight.
It should be understood that when calculating the local image quality score, one or more indexes of harris response HR, normalized gradient SR, peak signal-to-noise ratio PSNR, normalized signal quantity NSR and the like may be used for calculation, and the relevant content is similarly described above and will not be described herein.
The target fusion weights for the first target sub-image and the second target sub-image may be determined in any suitable manner. Alternatively, assuming that the local image quality score of the first target sub-image in the overlapping portion (i.e., the first local image quality) is a, and that of the second target sub-image in the overlapping portion (i.e., the second local image quality) is b, the target fusion weight corresponding to the first target sub-image may be determined to be a/(a+b), and the target fusion weight corresponding to the second target sub-image to be b/(a+b). In this way too, a higher local image quality results in a higher determined target fusion weight.
S1061B: and according to the target fusion weights respectively corresponding to the first target sub-image and the second target sub-image, carrying out weighted summation on the pixel value of the first target sub-image and the pixel value of the second target sub-image in the overlapped part so as to locally fuse the overlapped part between the first target sub-image and the second target sub-image to obtain a target local fusion image.
Specifically, through the target fusion weights respectively corresponding to the first target sub-image and the second target sub-image, the pixel values of all the pixel points of the target local fusion image can be obtained by carrying out weighted summation on the pixel values of the first target sub-image and the pixel values of the second target sub-image in the overlapping part, so that local fusion is realized, and the target local fusion image is obtained.
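The weighted summation of sub-step S1061B, with the a/(a+b) and b/(a+b) weights suggested above, can be sketched as follows; the function name and inputs are illustrative.

```python
import numpy as np

def fuse_two(overlap1, overlap2, a, b):
    """Fuse the overlapping parts of the first and second target sub-images
    (sub-step S1061B) with the weights a/(a+b) and b/(a+b) derived from
    the first and second local image quality scores a and b."""
    w1 = a / (a + b)   # weight of the first target sub-image
    w2 = b / (a + b)   # weight of the second target sub-image
    return w1 * overlap1.astype(np.float64) + w2 * overlap2.astype(np.float64)
```

Equal quality scores reduce to a plain 50/50 average, while a higher score pulls the target local fusion image toward the sharper of the two overlaps.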
Based on this, by including the alternative embodiments of the above sub-steps S1061A to S1061B, it is possible to effectively obtain a partial fusion of the overlapping portion between the first target sub-image and the second target sub-image to obtain a target partial fusion image, so as to determine the input image in the sub-step S1062; in addition, as the local image quality of the first target sub-image and the second target sub-image is higher, the target fusion weight is higher, the target local fusion image obtained by the subsequent local fusion can be ensured, and the characteristics of the image with higher local image quality can be more reserved, so that the effect of local fusion is improved. (similarly, the technical effects herein may also be understood in conjunction with the foregoing description of FIG. 3E, and will not be described in detail herein.)
In some optional embodiments, "expanding the edges of the input image according to the fused image to obtain a second expanded image" in step S106 includes: determining a second image area including the first image area from the fused image, and determining a third image area except the first image area in the second image area; and correspondingly filling the third image area outside the edge of the input image to form a second extension image.
According to the method and the device, the edge of the input image can be rapidly and effectively expanded according to the fusion image by correspondingly filling the third image area except the first image area in the second image area outside the edge of the input image, so that the second expanded image is obtained, and the output image for fingerprint identification can be conveniently and subsequently determined.
For example, referring to FIG. 7, the location of an exemplary third image region in a fused image is shown. It can be seen that the third image area corresponds to the portion of the first extension image that extends beyond the first image area; filling it correspondingly outside the edge of the input image implements the edge expansion and yields the second extension image. Since the input image has the same size as the first target sub-image and the first image area, the second extension image has the same size as the first extension image.
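The filling of the third image area outside the edge of the input image can be sketched as follows, assuming the first image area's (top, left) position in the fused image is known. Cutting the second image region from the fused image and then overwriting the first image area with the input image is one way to realize the described filling; the names and the margin parameter are illustrative.

```python
import numpy as np

def build_second_extension(fused, input_img, top, left, margin):
    """Form the second extension image: the input image occupies the
    position of the first image region, and its surroundings are filled
    with the third image region taken from the fused image."""
    h, w = input_img.shape
    H, W = fused.shape
    t, l = max(0, top - margin), max(0, left - margin)
    b, r = min(H, top + h + margin), min(W, left + w + margin)
    ext = fused[t:b, l:r].copy()  # second image region (first + third regions)
    # overwrite the first image region with the input image
    ext[top - t:top - t + h, left - l:left - l + w] = input_img
    return ext
```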
In other alternative embodiments, step S106 includes: and determining the first target sub-image as an input image, and expanding the edge of the first target sub-image according to the fusion image to obtain a second expanded image.
According to the alternative embodiment, the first target sub-image is directly determined to be the input image, so that the input image can be effectively obtained, and the second expansion image can be obtained conveniently through edge expansion according to the input image.
Alternatively, the first target sub-image may be a fingerprint sub-image with the best image quality (i.e., a best quality sub-image) of the plurality of fingerprint sub-images. Which may be determined depending on the first target sub-image used by the first extension image.
In some optional embodiments, the "expanding the edge of the first target sub-image according to the fused image to obtain the second expanded image" includes: determining a second image area including the first image area from the fused image, and determining a third image area except the first image area in the second image area; and correspondingly filling the third image area outside the edge of the first target sub-image to form a second extension image.
According to the method and the device, the edge of the first target sub-image (namely the input image) can be quickly and effectively expanded according to the fusion image by correspondingly filling the third image area except the first image area in the second image area outside the edge of the first target sub-image (namely the input image), so that a second expanded image is obtained, and the output image for fingerprint identification can be conveniently and subsequently determined.
For example, still referring to fig. 7, the location of an exemplary third image region in the fused image is shown. The third image area corresponds to the portion of the first extension image that extends beyond the first image area; filling it correspondingly outside the edge of the input image (the first target sub-image) implements the edge expansion and yields the second extension image. Since the input image has the same size as the first target sub-image and the first image area, the second extension image has the same size as the first extension image.
Optionally, when the third image area is correspondingly filled outside the edge of the first target sub-image, the junction between the third image area and the edge of the first target sub-image may be blended, so as to avoid a boundary effect between the two.
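As a minimal sketch of this filling step — assuming grayscale images stored as NumPy arrays, a uniform extension margin on every side, and illustrative function and variable names not taken from the patent — the second extension image can be formed by cropping the second image area from the fused image and then pasting the input image back into the first image area, so that only the third image area (the surrounding ring) comes from the fusion:

```python
import numpy as np

def fill_second_extension(fused, input_img, top_left, margin):
    """Form the second extension image: crop the second image area from the
    fused image, then overwrite the first image area with the input image,
    leaving only the third image area filled from the fused image.
    `top_left` is the (row, col) of the first image area inside the fused
    image; `margin` is how far the second area extends past each edge.
    Both names and the uniform margin are illustrative assumptions."""
    h, w = input_img.shape
    r, c = top_left
    # Crop the second image area (first area plus a margin on every side).
    second = fused[r - margin : r + h + margin, c - margin : c + w + margin].copy()
    # Paste the input image into the first image area; the remaining border
    # (the third image area) stays filled from the fused image.
    second[margin : margin + h, margin : margin + w] = input_img
    return second
```

A real implementation would additionally blend the junction between the pasted input image and the surrounding third image area, as noted above, rather than hard-pasting.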
S108: one of the first extension image and the second extension image is determined as the output image for fingerprint identification (see the flowchart of fig. 1).
After the first and second extension images are obtained, one of the first and second extension images may be determined as an output image to facilitate fingerprint identification using the output image.
Based on this, according to the alternative embodiments of steps S102 to S108: on the one hand, one of the first extension image and the second extension image can be output as the output image for fingerprint identification, so that a single, better output image is used for fingerprint identification even though the optical fingerprint sensor collects multiple fingerprint sub-images at a time, which effectively reduces storage consumption during fingerprint identification and also effectively improves the identification speed and success rate; on the other hand, since the first extension image and the second extension image are both obtained by extension from the fused image, and the fused image well aggregates the information of the collected fingerprint sub-images with a larger field of view and lower spatial noise, whichever of the two is used as the output image for fingerprint identification retains these advantages.
In the present application, one of the first extension image and the second extension image may be determined as the output image using any rule. For example, in some alternative embodiments, in step S108, whichever of the first extension image and the second extension image has the higher image quality may be determined as the output image for fingerprint identification. This better ensures the image quality of the output image for fingerprint identification and effectively improves the fingerprint identification effect.
Alternatively, the image quality scores of the first extension image and the second extension image may each be calculated in the manner described above (the higher the score, the higher the image quality; the score may be determined from one or more indices such as the Harris response (HR), normalized gradient (SR), peak signal-to-noise ratio (PSNR), and normalized signal quantity (NSR), as already described and not repeated here), so that the one with the higher image quality can be determined from the two scores, thereby determining the output image.
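For illustration only — the patent names several candidate indices (HR, SR, PSNR, NSR) without fixing one, so the metric below is an assumption — a simple gradient-based sharpness score and the resulting selection of step S108 might look like:

```python
import numpy as np

def sharpness_score(img):
    """Stand-in image-quality score: mean gradient magnitude. The patent
    leaves the exact index open, so this choice is an assumption."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def pick_output(first_ext, second_ext):
    """Return whichever extension image scores higher (step S108)."""
    if sharpness_score(first_ext) >= sharpness_score(second_ext):
        return first_ext
    return second_ext
```

Any of the indices listed above could be substituted for `sharpness_score` without changing the selection logic.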
An overall implementation of an exemplary fingerprint image processing scheme in the present application is described below in conjunction with fig. 8. The scheme corresponds to the optional mode of "locally fusing the overlapping part of the first target sub-image and the second target sub-image to obtain a target local fused image, and then splicing the target local fused image with the part of the first target sub-image that is not locally fused to form the input image" (i.e., the optional mode of sub-steps S1061-S1062). Referring to fig. 8: first, a plurality of fingerprint sub-images 1 to N respectively acquired by a plurality of pixel units of the optical fingerprint sensor are acquired. Then, image quality evaluation, sorting, and calculation of each sub-image's offset relative to the best-quality sub-image are performed on the fingerprint sub-images 1 to N, yielding fingerprint sub-images ranked 1 to N by image quality. Next, all fingerprint sub-images (i.e., fingerprint sub-images 1 to N) are aligned by offset and fused to obtain a fused image; the edge of the first image area corresponding to the best-quality sub-image (the first target sub-image) is then expanded using the fused image to obtain a first extension image. Meanwhile, the fingerprint sub-image ranked 1 in image quality is taken as the first target sub-image (i.e., the best-quality sub-image), aligned by offset with the fingerprint sub-image ranked 2 (i.e., the sub-image with sub-optimal image quality), and their overlapping part is locally fused to obtain a target local fused image, from which the input image (i.e., a best-plus-sub-optimal fused image) is obtained; the edge of the input image is then expanded according to the fused image to obtain a second extension image at the same position as the first extension image. Finally, image quality evaluation may be performed on the first extension image and the second extension image, the one with the better image quality is selected as the output image for fingerprint identification, and fingerprint identification may then be performed using the output image (in other scenarios, it may be used for fingerprint registration).
Optionally, fig. 9 shows an overall implementation of another exemplary fingerprint image processing scheme. The difference between fig. 9 and fig. 8 is as follows. In fig. 8, the input image is obtained by locally fusing the fingerprint sub-image ranked 1 in image quality (i.e., the best-quality sub-image) with the fingerprint sub-image ranked 2 (i.e., the sub-image with sub-optimal image quality). In fig. 9, the fingerprint sub-images ranked 2 to N are traversed and each is locally fused with the best-quality sub-image, yielding N-1 input images; N-1 second extension images are then obtained correspondingly for the N-1 input images, image quality evaluation is performed on the N-1 second extension images together with the first extension image, and the one with the best image quality is selected as the output image for fingerprint identification; fingerprint identification or registration is then performed using the output image. The implementation in fig. 9 may be suitable when memory allows and storage pressure is low.
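The fig. 9 traversal reduces to evaluating all candidate extension images and keeping the best; a sketch under the assumption that `score` stands in for whichever image-quality index is actually used:

```python
import numpy as np

def select_output(first_ext, second_exts, score):
    """Evaluate the first extension image together with the N-1 second
    extension images (one per traversed sub-image in the fig. 9 variant)
    and return the candidate with the highest quality score. `score` is
    any image-quality function; its choice is an assumption here."""
    candidates = [first_ext] + list(second_exts)
    return max(candidates, key=score)
```

This makes the trade-off explicit: the fig. 9 variant holds N-1 extra candidates in memory, which is why the text restricts it to situations with low storage pressure.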
It should be understood that the above description with respect to fig. 8 and 9 is for ease of understanding only and is not intended to be limiting in any way in this application.
The implementation of the fingerprint image processing scheme of the present application is described below with reference to fig. 10, from which the implementation process of the technical scheme can be seen more intuitively. It should be understood that the following description with respect to fig. 10 does not limit the present application in any way.
Referring to fig. 10, it can be seen that the fingerprint image processing scheme of the present application includes at least two alternatives: alternative 1 is the scheme of "local fusion first, then extension and image selection", and alternative 2 is the scheme of "no local fusion; direct extension and image selection". For ease of presentation, "fingerprint sub-image" is represented by "sub-graph" in fig. 10.
Referring to fig. 10, in alternative 1:
step 1: and acquiring a plurality of fingerprint sub-images 1-N respectively acquired by a plurality of pixel units of the optical fingerprint sensor at the current moment, and fusing the plurality of fingerprint sub-images 1-N to obtain a fused image.
Step 2: determining, from the fused image, a first image area corresponding to the position of the first target sub-image among the plurality of fingerprint sub-images; determining, from the fused image, a second image area including the first image area (the second image area exceeds the edge of at least one side of the first image area); and cutting out the second image area from the fused image and taking the cutting result as the first extension image (in this way, the edge of the first image area is expanded according to the fused image to obtain the first extension image).
It should be understood that, in the example of alternative 1, sub-image 1 is taken as the first target sub-image, and sub-image 1 is taken as the fingerprint sub-image (i.e., the best quality sub-image) with the best image quality among the plurality of fingerprint sub-images 1 to N.
Step 3: locally fusing the overlapping part between the first target sub-image and a second target sub-image among the plurality of fingerprint sub-images to obtain a target local fused image, and splicing the target local fused image with the part of the first target sub-image that is not locally fused to form the input image (in this way, the input image is determined according to at least one fingerprint sub-image); then determining, from the fused image, a second image area including the first image area, determining a third image area, i.e., the part of the second image area other than the first image area, and correspondingly filling the third image area outside the edge of the input image to form the second extension image (in this way, the edge of the input image is expanded according to the fused image to obtain the second extension image).
It should be understood that, in the example of alternative 1, sub-image 1 is taken as the first target sub-image, sub-image 2 is taken as the second target sub-image, and sub-image 2 is the sub-image of the plurality of sub-images 1-N with the sub-optimal image quality.
Step 4: whichever of the first extension image and the second extension image has the higher image quality is determined as the output image for fingerprint identification.
Referring to fig. 10, in alternative 2:
step 1: and acquiring a plurality of fingerprint sub-images 1-N respectively acquired by a plurality of pixel units of the optical fingerprint sensor at the current moment, and fusing the plurality of fingerprint sub-images 1-N to obtain a fused image.
Step 2: determining, from the fused image, a first image area corresponding to the position of the first target sub-image among the plurality of fingerprint sub-images; determining, from the fused image, a second image area including the first image area (the second image area exceeds the edge of at least one side of the first image area); and cutting out the second image area from the fused image and taking the cutting result as the first extension image (in this way, the edge of the first image area is expanded according to the fused image to obtain the first extension image).
It should be understood that, in the example of alternative 2, sub-image 3 is taken as the first target sub-image, and sub-image 3 is the fingerprint sub-image with the best image quality (i.e., the best-quality sub-image) among the plurality of fingerprint sub-images 1 to N. Sub-image 3 is used here mainly to distinguish this example from alternative 1 for ease of understanding, and does not represent a substantial difference between them.
Step 3: the first target sub-image is determined as the input image; a second image area including the first image area is determined from the fused image, a third image area, i.e., the part of the second image area other than the first image area, is determined, and the third image area is correspondingly filled outside the edge of the first target sub-image to form the second extension image (in this way, the input image is determined according to at least one fingerprint sub-image, and the edge of the input image is expanded according to the fused image to obtain the second extension image).
Step 4: whichever of the first extension image and the second extension image has the higher image quality is determined as the output image for fingerprint identification.
It should be understood that the foregoing description of the fingerprint image processing method is merely illustrative of some embodiments of the present application and is not intended to limit the embodiments of the present application in any way.
According to a second aspect of embodiments of the present application, there is provided a fingerprint image processing apparatus. Referring to fig. 11, the fingerprint image processing apparatus 1100 includes:
The fusion module 1102 is configured to acquire a plurality of fingerprint sub-images respectively acquired by a plurality of pixel units of the optical fingerprint sensor at the current moment, and perform fusion processing on the plurality of fingerprint sub-images to obtain a fusion image;
a first expansion module 1104, configured to determine a first image area corresponding to a first target sub-image position in the plurality of fingerprint sub-images from the fused image, and expand an edge of the first image area according to the fused image, so as to obtain a first expanded image;
a second expansion module 1106, configured to determine an input image according to at least one fingerprint sub-image, and expand an edge of the input image according to the fused image, so as to obtain a second expanded image, where the at least one fingerprint sub-image includes the first target sub-image;
a determining module 1108 is configured to determine one of the first extended image and the second extended image as an output image for fingerprint identification.
According to the scheme provided by the embodiment of the application: on the one hand, one of the first extension image and the second extension image can be output as the output image for fingerprint identification, so that a single, better output image is used for fingerprint identification even though the optical fingerprint sensor collects multiple fingerprint sub-images at a time, which effectively reduces storage consumption during fingerprint identification and also effectively improves the identification speed and success rate; on the other hand, since the first extension image and the second extension image are both obtained by extension from the fused image, and the fused image well aggregates the information of the collected fingerprint sub-images with a larger field of view and lower spatial noise, whichever of the two is used as the output image for fingerprint identification retains these advantages.
In some optional embodiments, the first target sub-image is a fingerprint sub-image with the best image quality among the plurality of fingerprint sub-images; the first expansion module 1104 is specifically configured to: and determining a first image area corresponding to the fingerprint sub-image with the optimal image quality from the fused image.
In some alternative embodiments, the determining module 1108 is specifically configured to: and determining one of the first extension image and the second extension image with higher image quality as the output image for fingerprint identification.
In some alternative embodiments, the first expansion module 1104 is specifically configured to: determining a second image area including the first image area from the fused image, wherein the second image area exceeds an edge of at least one side of the first image area; and cutting out the second image area from the fused image, and taking the cutting-out result as the first expansion image.
In some optional embodiments, if none of the four side edges of the first image area coincides with an edge of the fused image, the four side edges of the second image area respectively exceed the corresponding four side edges of the first image area; or, if at least one side edge of the first image area coincides with an edge of the fused image, the second image area exceeds each side edge of the first image area that does not coincide with an edge of the fused image.
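This edge rule amounts to clamping the expanded crop at the fused image's border, so that the second image area only exceeds sides of the first image area that do not already touch the border. A sketch with an illustrative uniform margin (the function name, argument names, and the single-margin simplification are assumptions):

```python
def second_region_bounds(top, left, h, w, fused_h, fused_w, margin):
    """Compute the second image area around a first image area located at
    (top, left) with size h x w inside a fused image of size
    fused_h x fused_w. Each side is extended by `margin` but clamped at the
    fused image's border, so a side of the first image area that coincides
    with the border is not exceeded. Returns (row0, col0, row1, col1)."""
    r0 = max(0, top - margin)
    c0 = max(0, left - margin)
    r1 = min(fused_h, top + h + margin)
    c1 = min(fused_w, left + w + margin)
    return r0, c0, r1, c1
```

An interior first image area is thus exceeded on all four sides, while one in a corner of the fused image is exceeded only on the two sides away from the border.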
In some alternative embodiments, the second expansion module 1106 is specifically configured to: locally fuse the overlapping part between the first target sub-image and a second target sub-image among the plurality of fingerprint sub-images to obtain a target local fused image, wherein the second target sub-image is any fingerprint sub-image other than the first target sub-image among the plurality of fingerprint sub-images; and splice the part of the first target sub-image that is not locally fused with the target local fused image to form the input image.
In some alternative embodiments, the second expansion module 1106 is specifically configured to: determine, according to the first local image quality of the first target sub-image and the second local image quality of the second target sub-image, target fusion weights respectively corresponding to the first target sub-image and the second target sub-image in the overlapping part, where a higher local image quality corresponds to a higher target fusion weight; and perform, according to the target fusion weights respectively corresponding to the first target sub-image and the second target sub-image, a weighted summation of the pixel values of the first target sub-image and the second target sub-image in the overlapping part, so as to locally fuse the overlapping part between the first target sub-image and the second target sub-image, thereby obtaining the target local fused image.
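A minimal sketch of this quality-weighted summation over the overlap — normalizing the two local quality scores into fusion weights is an illustrative choice, since the patent only requires that the higher-quality sub-image receive the higher weight:

```python
import numpy as np

def fuse_overlap(patch_a, patch_b, quality_a, quality_b):
    """Quality-weighted local fusion of the overlapping part of two aligned
    sub-images: the sub-image with the higher local quality receives the
    larger fusion weight, and pixel values are combined by weighted
    summation. Normalizing the scores into weights is an assumption."""
    total = quality_a + quality_b
    w_a, w_b = quality_a / total, quality_b / total
    return w_a * patch_a.astype(float) + w_b * patch_b.astype(float)
```

Any monotone mapping from local quality to weight would satisfy the stated rule; simple normalization keeps the weights summing to one so pixel intensities stay in range.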
In some alternative embodiments, the second expansion module 1106 is specifically configured to: determine a second image area including the first image area from the fused image, and determine a third image area, i.e., the part of the second image area other than the first image area; and correspondingly fill the third image area outside the edge of the input image to form the second extension image.
In some optional embodiments, if the first target sub-image is a fingerprint sub-image with the best image quality among the plurality of fingerprint sub-images, the second target sub-image is a fingerprint sub-image with the suboptimal image quality among the plurality of fingerprint sub-images.
In some alternative embodiments, the second expansion module 1106 is specifically configured to: and determining the first target sub-image as the input image, and expanding the edge of the first target sub-image according to the fusion image to obtain the second expanded image.
In some alternative embodiments, the second expansion module 1106 is specifically configured to: determining a second image area including the first image area from the fused image, and determining a third image area except the first image area in the second image area; and correspondingly filling the third image area outside the edge of the first target sub-image to form the second extension image.
In some alternative embodiments, the fusion module 1102 is specifically configured to: determining a quality optimal sub-image with optimal image quality in the plurality of fingerprint sub-images; for each fingerprint sub-image except the quality optimal sub-image in the plurality of fingerprint sub-images, determining the similarity of the overlapping part of the fingerprint sub-image and the quality optimal sub-image according to reference offset data, and determining the real-time offset of the current moment between the fingerprint sub-image and the quality optimal sub-image according to the similarity; and aligning the plurality of fingerprint sub-images according to the real-time offset of the current moment between each fingerprint sub-image and the quality optimal sub-image, and fusing the aligned plurality of fingerprint sub-images to obtain the fused image.
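The offset determination of the fusion module 1102 can be sketched as scoring the similarity of the overlap for candidate offsets around the reference offset and keeping the best one. The similarity metric below (negative mean squared error) and all names are assumptions, since the patent does not fix the metric:

```python
import numpy as np

def refine_offset(best_img, sub_img, ref_dy, ref_dx, search=2):
    """Estimate the real-time offset between a fingerprint sub-image and the
    best-quality sub-image: score the similarity of their overlap for each
    candidate offset in a window around the reference offset and return the
    best candidate. Negative MSE as similarity is an assumption."""
    h, w = best_img.shape
    best = (ref_dy, ref_dx, -np.inf)
    for dy in range(ref_dy - search, ref_dy + search + 1):
        for dx in range(ref_dx - search, ref_dx + search + 1):
            # Overlapping windows of the two images under offset (dy, dx).
            ys = slice(max(0, dy), min(h, h + dy))
            xs = slice(max(0, dx), min(w, w + dx))
            ys2 = slice(max(0, -dy), min(h, h - dy))
            xs2 = slice(max(0, -dx), min(w, w - dx))
            a = best_img[ys, xs].astype(float)
            b = sub_img[ys2, xs2].astype(float)
            if a.size == 0:
                continue
            sim = -np.mean((a - b) ** 2)  # higher similarity = lower MSE
            if sim > best[2]:
                best = (dy, dx, sim)
    return best[0], best[1]
```

Each sub-image's refined offset would then be used to align it with the best-quality sub-image before the aligned sub-images are fused into the fused image.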
The fingerprint image processing apparatus 1100 according to the second aspect of the present application is based on the same inventive concept as the fingerprint image processing method according to the first aspect, corresponds to the fingerprint image processing method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which are therefore not repeated here. In addition, for the implementation of each module in the fingerprint image processing apparatus 1100 of this embodiment, reference may be made to the description of the corresponding parts in the foregoing fingerprint image processing method embodiments, which is likewise not repeated here.
According to a third aspect of embodiments of the present application, there is provided an electronic device, including: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface are communicated with each other through the communication bus; the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the method according to the first aspect.
Referring to fig. 12, a schematic structural diagram of an electronic device according to an embodiment of the present application is shown, and the specific embodiment of the present application does not limit the specific implementation of the electronic device. As shown in fig. 12, the electronic device 1200 may include: a processor 1202, a communication interface (Communications Interface) 1204, a memory 1206, and a communication bus 1208.
Wherein:
the processor 1202, the communication interface 1204, and the memory 1206 communicate with each other via a communication bus 1208.
A communication interface 1204 for communicating with other electronic devices or servers.
The processor 1202 is configured to execute the computer program 1210 and may specifically perform relevant steps in the above-mentioned fingerprint image processing method embodiment.
In particular, the computer program 1210 may include program code comprising computer operating instructions.
The processor 1202 may be a CPU, a GPU (Graphics Processing Unit), an ASIC (Application-Specific Integrated Circuit), or one or more integrated circuits configured to implement embodiments of the present application. The one or more processors included in the smart device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
Memory 1206 for storing computer programs 1210. The memory 1206 may comprise high-speed RAM memory or may further comprise non-volatile memory (non-volatile memory), such as at least one disk memory.
The computer program 1210 may include a plurality of computer instructions, and the computer program 1210 may specifically enable the processor 1202 to perform the operations corresponding to the fingerprint image processing method described in any one of the foregoing method embodiments through the plurality of computer instructions.
The specific implementation of each step in the computer program 1210 may refer to the corresponding steps and corresponding descriptions in the units in the above method embodiments, and have corresponding beneficial effects, which are not described herein. It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and modules described above may refer to corresponding procedure descriptions in the foregoing method embodiments, which are not repeated herein.
According to a fourth aspect of embodiments of the present application, there is further provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the fingerprint image processing method described in any of the method embodiments of the foregoing first aspect. The computer storage medium includes, but is not limited to: a Compact Disc Read-Only Memory (CD-ROM), a Random Access Memory (RAM), a floppy disk, a hard disk, a magneto-optical disk, or the like.
According to a fifth aspect of embodiments of the present application, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements a fingerprint image processing method as described in any of the embodiments of the first aspect.
According to a sixth aspect of the embodiments of the present application, there is further provided a fingerprint identification apparatus applied to an electronic device having a display screen, referring to fig. 13, the fingerprint identification apparatus 1300 includes: an optical fingerprint sensor 1302 for: imaging a plurality of light signals in different directions reflected by the finger above the display screen to acquire a plurality of fingerprint sub-images; a processing unit 1304 configured to perform the fingerprint image processing method according to any one of the first aspect.
It should be understood that the optical fingerprint sensor adopts a multi-angle light path design and may include a plurality of pixel units. The relevant content of the optical fingerprint sensor has been explained in the foregoing method embodiments of the first aspect, and will not be described herein.
In addition, as described above, the processing unit may be any processing unit; for example, it may be the processing unit of an electronic device (including but not limited to a mobile terminal, a computer, a fingerprint lock, etc.) to which the optical fingerprint sensor is applied, and may include one or more chips capable of performing data processing, including but not limited to a CPU (Central Processing Unit), an MCU (Microcontroller Unit), a GPU (Graphics Processing Unit), an FPGA (Field Programmable Gate Array), etc., which is beneficial for improving the processing effect.
The fingerprint image processing apparatus 1100/electronic device 1200/computer storage medium/computer program product/fingerprint identification apparatus 1300 embodiments of the present application are described in detail in the foregoing fingerprint image processing method embodiments, so that the relevant content and the beneficial effects thereof can be understood by referring to the foregoing method embodiments, and will not be described herein.
In addition, it should be noted that, the information related to the user (including, but not limited to, user equipment information, user personal information, etc.) and the data related to the embodiment of the present application (including, but not limited to, sample data for training the model, data for analyzing, stored data, presented data, etc.) are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide a corresponding operation entry for the user to select authorization or rejection.
It should be noted that, according to implementation requirements, each component/step described in the embodiments of the present application may be split into more components/steps, and two or more components/steps or part of operations of the components/steps may be combined into new components/steps, so as to achieve the purposes of the embodiments of the present application. It should be understood that the various technical features in the technical solutions of the embodiments of the present application may be combined in any suitable manner.
The above-described methods according to embodiments of the present application may be implemented in hardware, in firmware, or as software or computer code storable in a recording medium such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium and downloaded through a network to be stored in a local recording medium, so that the methods described herein may be processed by such software on a recording medium using a general-purpose computer, a special-purpose processor, or programmable or special-purpose hardware such as an Application-Specific Integrated Circuit (ASIC) or Field-Programmable Gate Array (FPGA). It is understood that a computer, processor, microprocessor controller, or programmable hardware includes a memory component (e.g., Random Access Memory (RAM), Read-Only Memory (ROM), flash memory, etc.) that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the methods described herein. Furthermore, when a general-purpose computer accesses code for implementing the methods shown herein, execution of the code converts the general-purpose computer into a special-purpose computer for performing the methods shown herein.
Those of ordinary skill in the art will appreciate that the elements and method steps of the examples described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The above embodiments are only for illustrating the embodiments of the present application, but not for limiting the embodiments of the present application, and various changes and modifications can be made by one skilled in the relevant art without departing from the spirit and scope of the embodiments of the present application, so that all equivalent technical solutions also fall within the scope of the embodiments of the present application, and the scope of the embodiments of the present application should be defined by the claims.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. It should be noted that the concepts of "first", "second", etc. mentioned in the embodiments of the present application are only used to distinguish between different devices, modules or units, and are not used to define the order or interdependence of functions performed by these devices, modules or units. It should be noted that references to "one" or "a plurality" of the embodiments of the present application are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that "one or more" is intended to be interpreted as "one or more" unless the context clearly indicates otherwise.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the embodiments of the present application, not to limit them. Although the embodiments of the present application have been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (17)

1. A fingerprint image processing method, comprising:
obtaining a plurality of fingerprint sub-images respectively acquired by a plurality of pixel units of an optical fingerprint sensor at a current moment, and fusing the plurality of fingerprint sub-images to obtain a fused image;
determining, from the fused image, a first image area corresponding to the position of a first target sub-image among the plurality of fingerprint sub-images, and extending an edge of the first image area according to the fused image to obtain a first extended image;
determining an input image according to at least one fingerprint sub-image, and extending an edge of the input image according to the fused image to obtain a second extended image, wherein the at least one fingerprint sub-image comprises the first target sub-image; and
determining one of the first extended image and the second extended image as an output image for fingerprint identification.
2. The method of claim 1, wherein the first target sub-image is the fingerprint sub-image with the optimal image quality among the plurality of fingerprint sub-images; and
the determining, from the fused image, the first image area corresponding to the position of the first target sub-image among the plurality of fingerprint sub-images comprises:
determining, from the fused image, the first image area corresponding to the fingerprint sub-image with the optimal image quality.
3. The method of claim 1, wherein the determining one of the first extended image and the second extended image as the output image for fingerprint identification comprises:
determining, of the first extended image and the second extended image, the one with the higher image quality as the output image for fingerprint identification.
4. The method according to any one of claims 1-3, wherein the extending the edge of the first image area according to the fused image to obtain the first extended image comprises:
determining, from the fused image, a second image area including the first image area, wherein the second image area exceeds an edge of at least one side of the first image area; and
cropping the second image area out of the fused image, and taking the cropped result as the first extended image.
5. The method of claim 4, wherein,
if none of the four side edges of the first image area coincides with an edge of the fused image, the four side edges of the second image area respectively exceed the corresponding four side edges of the first image area; or,
if the edge of at least one side of the first image area coincides with an edge of the fused image, the second image area exceeds those edges of the first image area that do not coincide with an edge of the fused image.
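(Illustrative note, not part of the claims.) One way to read claims 4-5 is a margin-based crop that is clamped at the fused-image border, so any side of the first image area that already touches the border is simply not extended. The sketch below assumes row/column coordinates for the first image area; all names and the fixed per-side margin are invented for illustration.

```python
import numpy as np

def crop_first_extended(fused, top, left, height, width, margin):
    # Second image area: the first image area at (top, left) enlarged by
    # `margin` pixels per side, clamped so it never leaves the fused image.
    rows, cols = fused.shape[:2]
    r0 = max(0, top - margin)
    c0 = max(0, left - margin)
    r1 = min(rows, top + height + margin)
    c1 = min(cols, left + width + margin)
    # Cropping this area out of the fused image yields the first extended image.
    return fused[r0:r1, c0:c1]
```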
6. The method according to any one of claims 1-3, wherein the determining an input image according to at least one fingerprint sub-image comprises:
locally fusing the overlapping part between the first target sub-image and a second target sub-image of the plurality of fingerprint sub-images to obtain a target local fused image, wherein the second target sub-image is any fingerprint sub-image of the plurality of fingerprint sub-images other than the first target sub-image; and
stitching the input image from the part of the first target sub-image that is not locally fused and the target local fused image.
7. The method of claim 6, wherein the locally fusing the overlapping part between the first target sub-image and the second target sub-image of the plurality of fingerprint sub-images to obtain the target local fused image comprises:
determining, according to a first local image quality of the first target sub-image and a second local image quality of the second target sub-image, target fusion weights respectively corresponding to the first target sub-image and the second target sub-image in the overlapping part, wherein a higher local image quality corresponds to a higher target fusion weight; and
performing, according to the target fusion weights respectively corresponding to the first target sub-image and the second target sub-image, a weighted summation of the pixel values of the first target sub-image and the pixel values of the second target sub-image in the overlapping part, so as to locally fuse the overlapping part between the first target sub-image and the second target sub-image to obtain the target local fused image.
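(Illustrative note, not part of the claims.) The weighted summation of claim 7 can be written out directly. The quality scores are assumed to be positive scalars from some local quality metric, which the claim leaves open; function and parameter names are invented.

```python
import numpy as np

def fuse_overlap(patch_first, patch_second, quality_first, quality_second):
    # Target fusion weights are proportional to local image quality,
    # so the higher-quality patch gets the higher weight (claim 7).
    total = quality_first + quality_second
    w_first = quality_first / total
    w_second = quality_second / total
    # Weighted summation of the pixel values in the overlapping part.
    return w_first * patch_first + w_second * patch_second
```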
8. The method of claim 6, wherein the extending the edge of the input image according to the fused image to obtain the second extended image comprises:
determining, from the fused image, a second image area including the first image area, and determining a third image area that is the part of the second image area other than the first image area; and
filling the third image area correspondingly outside the edge of the input image to form the second extended image.
9. The method of claim 6, wherein, if the first target sub-image is the fingerprint sub-image with the optimal image quality among the plurality of fingerprint sub-images, the second target sub-image is the fingerprint sub-image with the suboptimal (second-best) image quality among the plurality of fingerprint sub-images.
10. The method according to any one of claims 1-3, wherein the determining an input image according to at least one fingerprint sub-image and extending the edge of the input image according to the fused image to obtain the second extended image comprises:
determining the first target sub-image as the input image, and extending the edge of the first target sub-image according to the fused image to obtain the second extended image.
11. The method of claim 10, wherein the extending the edge of the first target sub-image according to the fused image to obtain the second extended image comprises:
determining, from the fused image, a second image area including the first image area, and determining a third image area that is the part of the second image area other than the first image area; and
filling the third image area correspondingly outside the edge of the first target sub-image to form the second extended image.
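(Illustrative note, not part of the claims.) The filling step of claims 8 and 11 can be sketched as surrounding the input image with the ring of fused-image pixels (the "third image area") that lies just outside the first image area. For brevity this sketch assumes the margin stays inside the fused-image border; all names are invented.

```python
import numpy as np

def pad_from_fused(input_img, fused, top, left, margin):
    # Second image area: the enclosing crop of the fused image around the
    # first image area at (top, left).
    h, w = input_img.shape[:2]
    extended = fused[top - margin: top + h + margin,
                     left - margin: left + w + margin].copy()
    # Overwrite the interior with the input image, so only the surrounding
    # ring (the "third image area") is taken from the fused image.
    extended[margin: margin + h, margin: margin + w] = input_img
    return extended
```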
12. The method according to any one of claims 1-3, wherein the fusing the plurality of fingerprint sub-images to obtain the fused image comprises:
determining a quality-optimal sub-image, that is, the sub-image with the optimal image quality among the plurality of fingerprint sub-images;
for each fingerprint sub-image of the plurality of fingerprint sub-images other than the quality-optimal sub-image, determining, according to reference offset data, the similarity of the overlapping part between the fingerprint sub-image and the quality-optimal sub-image, and determining, according to the similarity, the real-time offset at the current moment between the fingerprint sub-image and the quality-optimal sub-image; and
aligning the plurality of fingerprint sub-images according to the real-time offset at the current moment between each fingerprint sub-image and the quality-optimal sub-image, and fusing the aligned plurality of fingerprint sub-images to obtain the fused image.
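(Illustrative note, not part of the claims.) The real-time offset search of claim 12 can be sketched in one dimension: candidate offsets near a reference offset are scored by the similarity of the resulting overlap, and the best-scoring candidate is kept. Negative sum of squared differences stands in for the similarity measure, which the claim leaves open; a real implementation would search in two dimensions, and all names here are invented.

```python
import numpy as np

def estimate_offset(best, sub, reference_dx, search=2):
    # Score candidate horizontal offsets near the reference offset by the
    # similarity of the overlap (negative SSD: higher is more similar).
    h, w = best.shape
    best_dx, best_score = reference_dx, -np.inf
    for dx in range(reference_dx - search, reference_dx + search + 1):
        if dx <= 0 or dx >= w:
            continue
        # Overlap when `sub` sits dx pixels to the right of `best`.
        overlap_best = best[:, dx:]
        overlap_sub = sub[:, : w - dx]
        score = -np.sum((overlap_best - overlap_sub) ** 2)
        if score > best_score:
            best_dx, best_score = dx, score
    return best_dx
```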
13. A fingerprint image processing apparatus, comprising:
a fusion module, configured to obtain a plurality of fingerprint sub-images respectively acquired by a plurality of pixel units of an optical fingerprint sensor at a current moment, and to fuse the plurality of fingerprint sub-images to obtain a fused image;
a first extension module, configured to determine, from the fused image, a first image area corresponding to the position of a first target sub-image among the plurality of fingerprint sub-images, and to extend an edge of the first image area according to the fused image to obtain a first extended image;
a second extension module, configured to determine an input image according to at least one fingerprint sub-image, and to extend an edge of the input image according to the fused image to obtain a second extended image, wherein the at least one fingerprint sub-image comprises the first target sub-image; and
a determining module, configured to determine one of the first extended image and the second extended image as an output image for fingerprint identification.
14. An electronic device, comprising: a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus;
the memory is configured to store a computer program; and
the processor is configured to perform the method of any one of claims 1-12 by running the computer program stored in the memory.
15. A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of claims 1-12.
16. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1-12.
17. A fingerprint recognition device, applied to an electronic device having a display screen, the fingerprint recognition device comprising:
an optical fingerprint sensor configured to image a plurality of optical signals reflected in different directions by a finger above the display screen, so as to acquire a plurality of fingerprint sub-images; and
a processing unit configured to perform the method of any one of claims 1-12.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410093038.1A CN117854124A (en) 2024-01-22 2024-01-22 Fingerprint image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117854124A 2024-04-09

Family

ID=90531236



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination