WO2021114843A1 - Fingerprint image processing method and apparatus, electronic device and computer-readable medium - Google Patents

Fingerprint image processing method and apparatus, electronic device and computer-readable medium

Info

Publication number
WO2021114843A1
WO2021114843A1 (PCT/CN2020/119537; CN2020119537W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
fingerprint
shading
processed
fingerprint image
Prior art date
Application number
PCT/CN2020/119537
Other languages
English (en)
Chinese (zh)
Inventor
吴拥
吴桐
Original Assignee
北京迈格威科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京迈格威科技有限公司
Publication of WO2021114843A1

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 5/00 Image enhancement or restoration
            • G06T 5/77 Retouching; Inpainting; Scratch removal
            • G06T 5/70 Denoising; Smoothing
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10004 Still image; Photographic image
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20024 Filtering details
              • G06T 2207/20048 Transform domain processing
                • G06T 2207/20056 Discrete and fast Fourier transform [DFT, FFT]
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30196 Human being; Person
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
              • G06V 40/12 Fingerprints or palmprints
                • G06V 40/13 Sensors therefor

Definitions

  • the present invention relates to the technical field of image processing, in particular to a fingerprint image processing method, device, electronic equipment and computer readable medium.
  • the large-area under-screen fingerprint means that there is a large area on the screen of the mobile terminal for fingerprint collection.
  • the size of the image collected each time corresponds to the finger-pressing area; therefore, the collected image area varies with the pressing position.
  • the fingerprint image under the screen is a fingerprint image collected across the screen.
  • the fingerprint signal accounts for only about 10% of the image, so a clean fingerprint image needs to be obtained through an image preprocessing algorithm.
  • the main algorithm is to subtract the static shading image from the collected image to obtain the fingerprint image.
  • the static shading image is a clean shading image collected in a specific state. In different environments, however, the state of the finger and of the screen changes: the fingerprint pattern of a dry or cold finger is flatter than that of a finger at normal temperature, and the screen itself expands and contracts with heat. Therefore, when the static shading image collected at room temperature is subtracted from a fingerprint image collected in a dry, cold state, the shading cannot be completely removed.
  • the fingerprint sensor under the large-area screen is different from the common optical fingerprint sensor.
  • the fingerprint sensor under the large-area screen has a much larger sensing area, so the finger-pressing area occupies only a small portion of it, whereas the common optical fingerprint sensor is small enough to be covered by a single finger. Because the collection areas of the static shading image and of the fingerprint image therefore differ, the static shading image and the fingerprint image cannot be used directly to perform shading-removal processing on the fingerprint image.
  • the purpose of the present invention is to provide a fingerprint image processing method, device, electronic equipment, and computer-readable medium to alleviate the technical problem that traditional shading-removal technology has a poor shading-removal effect on fingerprint images collected by a fingerprint sensor under a large-area screen.
  • an embodiment of the present invention provides a fingerprint image processing method, including: acquiring an original shading image and a fingerprint image to be processed, wherein the original shading image and the fingerprint image to be processed are images collected by the same fingerprint sensor, and the original shading image and the fingerprint image to be processed are images after denoising; intercepting, from the original shading image, the shading image corresponding to the fingerprint image to be processed to obtain a target shading image; and combining the target shading image and the fingerprint image to be processed to determine fingerprint information in the fingerprint image to be processed, and using the fingerprint information to determine the fingerprint image after the shading is removed.
  • obtaining the original shading image includes: performing Fourier transform on the shading image collected by the sensor to obtain a target frequency domain image; using a mask image to remove high-frequency components in the target frequency domain image to obtain a filtered image; and performing inverse Fourier transform on the filtered image to obtain the original shading image.
  • using a mask image to remove high-frequency components in the target frequency domain image to obtain a filtered image includes: determining a first designated area in the target frequency domain image, wherein the first designated area corresponds to a second designated area, and the second designated area is the area of the mask image whose pixel value is the target value; setting the frequency values in the first designated area to the target value, and determining the resulting image as the filtered image.
  • intercepting the shading image corresponding to the fingerprint image to be processed from the original shading image includes: determining a plurality of target coordinates in the original shading image, wherein the target coordinates are the relative coordinates of the multiple vertices of the fingerprint image to be processed in the original shading image, and the multiple target coordinates can determine the relative area of the fingerprint image to be processed in the original shading image; and intercepting, from the original shading image based on the multiple target coordinates, the shading image corresponding to the fingerprint image to be processed to obtain the target shading image.
  • acquiring a fingerprint image to be processed includes: acquiring an original fingerprint image; and using a frequency domain filtering algorithm to remove common mode noise in the original fingerprint image to obtain the fingerprint image to be processed.
  • determining the fingerprint information in the fingerprint image to be processed in combination with the target shading image and the fingerprint image to be processed includes: calculating the difference between the pixel values of the fingerprint image to be processed and the pixel values of the target shading image to obtain a pixel difference, and determining the pixel difference as the fingerprint information.
  • determining the target fingerprint image by using the fingerprint information includes: performing image enhancement and denoising processing on the fingerprint information to obtain the target fingerprint image.
  • the method further includes: after determining the target fingerprint image by using the fingerprint information, updating the update area in the original shading image through the fingerprint image to be processed to obtain a new shading image; and performing brightness alignment processing on the new shading image to obtain an alignment processing result.
  • performing brightness alignment processing on the new shading image to obtain the alignment processing result includes: calculating the brightness average of the updated area in the new shading image to obtain a first brightness average value; calculating the brightness average of the unupdated area in the new shading image to obtain a second brightness average value; calculating the ratio between the first brightness average value and the second brightness average value; and using the ratio to perform brightness alignment processing on the new shading image to obtain the alignment processing result.
  • using the ratio to perform brightness alignment processing on the new shading image to obtain the alignment processing result includes: using the alignment formula I'_no_update_area = ratio · I_no_update_area to perform brightness alignment processing on the new shading image, where ratio is the ratio, I_no_update_area is the pixel value of the unupdated area in the new shading image, I'_no_update_area is the alignment processing result, and no_update_area is the unupdated area in the new shading image.
  • an embodiment of the present invention also provides a fingerprint image processing device, including: an acquisition unit for acquiring an original shading image and a fingerprint image to be processed, wherein the original shading image and the fingerprint image to be processed are images collected by the same fingerprint sensor, and the original shading image and the fingerprint image to be processed are images after denoising; an interception unit for intercepting, from the original shading image, the shading image corresponding to the fingerprint image to be processed to obtain a target shading image; and a determining unit configured to combine the target shading image and the fingerprint image to be processed to determine the fingerprint information in the fingerprint image to be processed, and to use the fingerprint information to determine the fingerprint image after the shading is removed.
  • an embodiment of the present invention also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method described in any one of the above first aspects.
  • an embodiment of the present invention also provides a computer-readable medium having non-volatile program code executable by a processor, the program code causing the processor to execute the steps of the method described in any one of the above first aspects.
  • the original shading image and the fingerprint image to be processed are acquired; then, the shading image corresponding to the fingerprint image to be processed is intercepted from the original shading image, so as to obtain the target shading image; finally, Combine the captured target shading image and the fingerprint image to be processed to determine the fingerprint image after shading is removed.
  • this method of removing the shading of the fingerprint image can obtain a fingerprint image from which the shading has been completely removed, thereby alleviating the technical problem that traditional shading technology has a poor shading-removal effect on fingerprint images collected by the fingerprint sensor under a large-area screen, and achieving the technical effect of obtaining a fingerprint image with the shading completely removed.
  • Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the present invention.
  • Fig. 2 is a flowchart of a fingerprint image processing method according to an embodiment of the present invention.
  • Fig. 3 is a flowchart of another fingerprint image processing method according to an embodiment of the present invention.
  • Fig. 4 is a schematic diagram of a mask image according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a shading image without removing common mode noise according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of an original shading map for removing common mode noise according to an embodiment of the present invention.
  • Fig. 7 is a schematic diagram of an original fingerprint image according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a fingerprint image to be processed after common mode noise is removed according to an embodiment of the present invention.
  • Fig. 9 is a schematic diagram of a target shading image according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of a fingerprint image corresponding to fingerprint information according to an embodiment of the present invention.
  • Fig. 11 is a schematic diagram of a target fingerprint image according to an embodiment of the present invention.
  • Fig. 12 is a schematic diagram of a new shading image according to an embodiment of the present invention.
  • Fig. 13 is a schematic diagram of a fingerprint image processing device according to an embodiment of the present invention.
  • Fig. 14 schematically shows a block diagram of a computing processing device for executing the method according to the present invention.
  • Fig. 15 schematically shows a storage unit for holding or carrying program codes for implementing the method according to the present invention.
  • the electronic device 100 for implementing an embodiment of the present invention will be described.
  • the electronic device can be used to run the fingerprint image processing method of each embodiment of the present invention.
  • the electronic device 100 includes one or more processors 102, one or more memories 104, an input device 106, an output device 108, and a camera 110. These components are interconnected through a bus system 112 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structure of the electronic device 100 shown in FIG. 1 are only exemplary and not restrictive, and the electronic device may also have other components and structures as required.
  • the processor 102 may adopt a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic array (PLA), or an application-specific integrated circuit (ASIC).
  • the processor 102 may be a central processing unit (CPU) or another form of processing unit with data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
  • the memory 104 may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the volatile memory may include random access memory (RAM) and/or cache memory (cache), for example.
  • the non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, and the like.
  • One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may run the program instructions to implement the client functions (implemented by the processor) and/or other desired functions in the embodiments of the present invention described below.
  • Various application programs and various data, such as data used and/or generated by the application programs, can also be stored in the computer-readable storage medium.
  • the input device 106 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, and a touch screen.
  • the output device 108 may output various information (for example, images or sounds) to the outside (for example, a user), and may include one or more of a display, a speaker, and the like.
  • the camera 110 is used to collect image data, wherein the data collected by the camera is subjected to the fingerprint image processing method to obtain a fingerprint image after shading is removed.
  • an embodiment of a fingerprint image processing method is provided. It should be noted that the steps shown in the flowchart of the accompanying drawings can be executed in a computer system, such as a set of computer-executable instructions, and, although a logical sequence is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the one here.
  • Fig. 2 is a flowchart of a fingerprint image processing method according to an embodiment of the present invention. As shown in Fig. 2, the method includes the following steps:
  • Step S202 Obtain an original shading image and a fingerprint image to be processed, where the original shading image and the fingerprint image to be processed are images collected by the same fingerprint sensor, and the original shading image and the fingerprint image to be processed are images after denoising.
  • the original shading image and the fingerprint image to be processed are images collected by using the same fingerprint sensor, where the fingerprint sensor may be a large-area under-screen fingerprint sensor.
  • the size of the shading image collected by the fingerprint sensor under the large-area screen is larger than that of the fingerprint image.
  • the size of the shading image is 512x512 pixels
  • the size of the fingerprint image is 200x192 pixels.
  • the fingerprint sensor can also be other types of fingerprint sensors, which are not specifically limited in this embodiment.
  • the above-mentioned original shading image and fingerprint image to be processed are images after common mode denoising.
  • the method of denoising the original shading image and the fingerprint image to be processed can effectively remove the shading noise in the fingerprint image and provide a good fingerprint image for the subsequent fingerprint recognition process, improving the accuracy of fingerprint recognition and the user experience.
  • in step S204, the shading image corresponding to the fingerprint image to be processed is intercepted from the original shading image to obtain a target shading image.
  • the size of the original shading image may be 512 ⁇ 512 pixels, and the size of the fingerprint image may be 200 ⁇ 192 pixels.
  • the image in the finger pressing area (200x192 pixels in size) can be intercepted as the target shading image.
  • the image in the finger pressing area (200 ⁇ 192 pixels in size) is the image corresponding to the fingerprint image to be processed in the original shading image.
  • Step S206 Combine the target shading image and the fingerprint image to be processed to determine fingerprint information in the fingerprint image to be processed, and use the fingerprint information to determine the fingerprint image after the shading is removed.
  • the target shading image and the fingerprint image to be processed can be combined to determine the fingerprint information in the fingerprint image. From the above description, it can be seen that because the size of the shading image collected by the fingerprint sensor under the large-area screen and the fingerprint image are inconsistent, it is impossible to directly use the shading image and the fingerprint image to remove the shading of the fingerprint image. However, the size of the captured target shading image is the same as the size of the collected fingerprint image. In this case, the target shading image and the collected fingerprint image can be directly used for shading removal processing.
  • the original shading image and the fingerprint image to be processed are acquired; then, the shading image corresponding to the fingerprint image to be processed is intercepted from the original shading image, so as to obtain the target shading image; finally, Combine the captured target shading image and the fingerprint image to be processed to determine the fingerprint image after shading is removed.
  • this method of removing the shading of the fingerprint image can obtain a fingerprint image from which the shading has been completely removed, which alleviates the technical problem that traditional shading technology has a poor shading-removal effect on the fingerprint image collected by the fingerprint sensor under the large-area screen, thereby achieving the technical effect of completely removing the shading from the fingerprint image.
  • the original shading image is first acquired.
  • the original shading image can be obtained in the following manner:
  • Step S301 Perform Fourier transform on the shading image collected by the sensor to obtain the target frequency domain image.
  • Step S302 Use a mask image to remove high frequency components in the target frequency domain image to obtain a filtered image.
  • Step S303 Perform inverse Fourier transform on the filtered image to obtain the original shading image.
  • the frequency domain filtering algorithm is used to remove the common mode noise of the shading image collected by the sensor, resulting in the original shading image described above. It should be noted that, in this embodiment, the original shading image is denoted by the symbol I_base-denoise.
  • the specific process of the above frequency domain filtering algorithm can be described as: first, Fourier transform is performed on the shading image collected by the sensor to obtain the target frequency domain image.
  • the target frequency domain image contains a high frequency component
  • the high frequency component is located in the middle position of the target frequency domain image.
  • the mask image is used to remove the high frequency components in the target frequency domain image to obtain a filtered image. Since the high frequency component is located in the middle of the target frequency domain image, the mask image shown in FIG. 4 can be used to remove the high frequency component in the target frequency domain image.
  • Figure 5 is the shading image I_base without common mode noise removed
  • Figure 6 is the original shading image I_base-denoise with common mode noise removed. It can be seen by comparison that the above method effectively removes the common mode noise in the shading image I_base and provides a good shading image for the subsequent fingerprint recognition process.
  • step S302 a mask image is used to remove high frequency components in the target frequency domain image, and the process of obtaining a filtered image is described as follows:
  • the mask image used may be the mask image shown in FIG. 4.
  • black indicates that the frequency value of the corresponding region in the frequency domain image is set to 0 (that is, the above-mentioned target value), and white indicates that the frequency value of the corresponding region remains unchanged.
  • the second designated area is the black area in FIG. 4
  • the first designated area is the area corresponding to the black area in the target frequency domain image.
  • the frequency value in the first designated area is set as the target value, and the image after the setting is determined as the filtered image.
  • the frequency value in the first designated area can be set to the target value (for example, 0) through the mask image, and the image after the setting can be determined as the filtered image.
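  • As a rough illustration of the frequency-domain denoising in steps S301 to S303, the following sketch applies the mask to the Fourier transform of the collected image. It is a minimal sketch under stated assumptions: the function name remove_common_mode_noise, the mask geometry, and the 512x512 size are illustrative and not taken from the patent; only the transform / mask / inverse-transform structure follows the description above.

```python
import numpy as np

def remove_common_mode_noise(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Illustrative frequency-domain filter following steps S301 to S303.

    image: shading (or fingerprint) image collected by the sensor.
    mask:  binary image of the same size; 0 marks the first designated
           area (frequency values set to the target value 0), 1 keeps
           the frequency values unchanged.
    """
    freq = np.fft.fft2(image.astype(np.float64))   # S301: target frequency-domain image
    filtered = freq * mask                          # S302: zero the designated area
    return np.real(np.fft.ifft2(filtered))          # S303: back to the spatial domain

# Hypothetical mask: without fftshift the highest frequencies sit near the
# centre of the spectrum, matching "the high frequency component is located
# in the middle position of the target frequency domain image".
h, w = 512, 512                                     # assumed sensor size
mask = np.ones((h, w))
mask[h // 2 - 32 : h // 2 + 32, w // 2 - 32 : w // 2 + 32] = 0   # assumed extent
```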
  • step S201 acquiring a fingerprint image to be processed includes the following steps:
  • the frequency domain filtering algorithm is used to remove the common mode noise in the original fingerprint image to obtain the fingerprint image to be processed.
  • the fingerprint image collected by the fingerprint sensor is taken as the original fingerprint image I raw .
  • the frequency domain filtering algorithm can be used to remove the common mode noise in the original fingerprint image, thereby obtaining the fingerprint image to be processed I_raw-denoise.
  • Figure 7 is a schematic diagram of the original fingerprint image
  • Figure 8 is a schematic diagram of the fingerprint image to be processed after the common mode noise is removed. It can be seen from the comparison of Figure 7 and Figure 8 that the above method effectively removes the common mode noise in the original fingerprint image and provides a good image for the subsequent fingerprint identification process.
  • the above method effectively removes the common mode noise in the shading image I_base and in the fingerprint image to be processed, providing good images for the subsequent fingerprint identification process and improving the user experience.
  • the shading image corresponding to the fingerprint image to be processed can then be intercepted from the original shading image, so as to obtain the target shading image.
  • step S204 intercepting the shading image corresponding to the fingerprint image to be processed from the original shading image includes the following steps:
  • Step S2041 Determine multiple target coordinates in the original shading image; wherein the target coordinates are the relative coordinates of the multiple vertices of the fingerprint image to be processed in the original shading image, and the Multiple target coordinates can determine the relative area of the fingerprint image to be processed in the original shading image.
  • the coordinates (ie, target coordinates) of multiple vertices of the fingerprint image to be processed in the original shading image can be determined.
  • the multiple vertices are vertices that can determine the size of the fingerprint image to be processed, for example, the upper left corner and the lower right corner of the fingerprint image to be processed, or the upper right corner and the lower left corner of the fingerprint image to be processed. This is not specifically limited.
  • the target coordinates can be expressed as (x, y) and (x+192, y+200).
  • (x, y) are the coordinates of the upper left corner of the fingerprint image to be processed in the original shading image
  • (x+192, y+200) are the coordinates of the lower right corner of the fingerprint image to be processed in the original shading image.
  • for example, if (x, y) is (128, 137), then (x+192, y+200) is (320, 337).
  • step S2042 the shading image corresponding to the fingerprint image to be processed is intercepted from the original shading image based on the multiple target coordinates to obtain the target shading image.
  • the base area corresponding to the fingerprint image to be processed can be intercepted from the original shading image according to the target coordinates to obtain I_base-area, that is, the target shading image, where I_base-area is the region of I_base-denoise bounded by the target coordinates (x, y) and (x+192, y+200).
  • the captured target shading image is shown in Figure 9.
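  • The interception step can be sketched as a simple array crop. The sketch below is illustrative only: it assumes a NumPy array indexed as [row, column], with x as the column and y as the row of the upper-left target coordinate; the function name crop_target_shading is hypothetical.

```python
import numpy as np

def crop_target_shading(base_denoise: np.ndarray, x: int, y: int,
                        width: int = 192, height: int = 200) -> np.ndarray:
    """Intercept I_base-area from I_base-denoise using the target coordinates."""
    return base_denoise[y : y + height, x : x + width]

# Example with the coordinates given in the text: upper-left (128, 137),
# lower-right (320, 337) on a 512x512 original shading image.
base_denoise = np.zeros((512, 512))              # placeholder shading image
target_shading = crop_target_shading(base_denoise, x=128, y=137)
print(target_shading.shape)                      # (200, 192)
```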
  • the target shading image and the fingerprint image to be processed can be combined to determine the fingerprint information in the fingerprint image to be processed, and the fingerprint information can be used to determine the fingerprint after the shading is removed. image.
  • combining the target shading image and the fingerprint image to be processed to determine the fingerprint information in the fingerprint image to be processed includes the following steps:
  • the difference between the pixel value of the fingerprint image to be processed and the pixel value of the target shading image is calculated to obtain a pixel difference, and the pixel difference is determined as the fingerprint information.
  • the difference between the pixel values of the fingerprint image to be processed I_raw-denoise and the pixel values of the target shading image I_base-area can be calculated to obtain the pixel difference, and the pixel difference is determined as the fingerprint information I_finger, so that the fingerprint image after shading removal can be determined based on the fingerprint information I_finger; the fingerprint image corresponding to the fingerprint information I_finger is shown in FIG. 10.
  • the specific operation of using the fingerprint information to determine the target fingerprint image is: performing image enhancement and denoising processing on the fingerprint information I_finger to obtain the target fingerprint image shown in FIG. 11. From the comparison between FIG. 10 and FIG. 11, it can be seen that by performing image enhancement processing on the fingerprint information, a clearer fingerprint image can be obtained.
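  • A minimal sketch of the subtraction described above, under the same assumptions as the earlier snippets: the fingerprint information I_finger is taken as the per-pixel difference between the denoised fingerprint image and the intercepted target shading image. The enhancement and denoising pass that produces the final target fingerprint image is application specific and is only indicated by a comment.

```python
import numpy as np

def extract_fingerprint_info(raw_denoise: np.ndarray,
                             base_area: np.ndarray) -> np.ndarray:
    """I_finger = I_raw-denoise - I_base-area (per-pixel difference)."""
    return raw_denoise.astype(np.float64) - base_area.astype(np.float64)

# The result corresponds to FIG. 10; a further image-enhancement and
# denoising step (not specified here) would yield the target fingerprint
# image of FIG. 11.
```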
  • because the size of the shading image collected by the fingerprint sensor under the large-area screen is inconsistent with that of the fingerprint image, it is impossible to directly use the shading image and the fingerprint image to remove the shading of the fingerprint image.
  • the size of the captured target shading image is the same as the size of the collected fingerprint image.
  • the target shading image and the collected fingerprint image can be used to remove the shading, which solves the problem that the original shading image and the fingerprint image cannot be used directly for the shading-removal processing of the fingerprint image.
  • the update area in the original shading image may also be updated through the fingerprint image to be processed to obtain a new shading image, and brightness alignment processing is then performed on the new shading image to obtain an alignment processing result.
  • the update area is the area where the target shading image I base-area described in the foregoing embodiment is located. Assuming that the target coordinates are the upper left corner (128,137) and the lower right corner (320,337), then the update area is the area corresponding to the upper left corner coordinates (128,137) and the lower right corner coordinates (320,337).
  • the area in the original shading image where the target shading image I_base-area is located can be updated through the fingerprint image to be processed to obtain a new shading image.
  • the update area in the original shading image may be updated in the following manner: the pixel values of the update area in the original shading image I_base-denoise are replaced with the corresponding pixel values of the fingerprint image to be processed I_raw-denoise, where I_base-denoise is the pixel value of the original shading image and I_raw-denoise is the pixel value of the fingerprint image to be processed.
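  • The update step can be sketched as overwriting the update area of the shading image with the denoised fingerprint image. This is an illustrative reading of the description above; the copy-then-assign structure and the coordinate convention are assumptions.

```python
import numpy as np

def update_shading(base_denoise: np.ndarray, raw_denoise: np.ndarray,
                   x: int, y: int, width: int = 192, height: int = 200) -> np.ndarray:
    """Replace the update area of I_base-denoise with I_raw-denoise."""
    new_base = base_denoise.copy()
    new_base[y : y + height, x : x + width] = raw_denoise   # update_area pixels
    return new_base
```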
  • the brightness alignment processing can be performed on the new shading image in the following manner, which specifically includes the following steps:
  • the average brightness of the update area in the new shading image is calculated by the formula V_update_area = (1 / N_update_area) · Σ_{(x,y) ∈ update_area} I_base-denoise(x, y), where update_area represents the update area, V_update_area represents the first average brightness, N_update_area represents the total number of pixels in the update area, and I_base-denoise(x, y) represents the pixel value of the new shading image.
  • the average brightness of the unupdated area in the new shading image is calculated to obtain the second average brightness.
  • the average brightness of the unupdated area is calculated by the formula V_no_update_area = (1 / N_no_update_area) · Σ_{(x,y) ∈ no_update_area} I_base-denoise(x, y), where no_update_area represents the unupdated area, V_no_update_area represents the second average brightness, and N_no_update_area represents the total number of pixels in the unupdated area.
  • the ratio between the first average brightness and the second average brightness is calculated by the formula ratio = V_update_area / V_no_update_area, where ratio is the ratio.
  • the alignment formula I'_no_update_area = ratio · I_no_update_area can then be used to perform brightness alignment processing on the new shading image to obtain the alignment processing result, where ratio is the ratio, I_no_update_area is the pixel value of the unupdated area in the new shading image, I'_no_update_area is the alignment processing result, and no_update_area is the unupdated area in the new shading image.
  • Figure 12 is a schematic diagram of the new shading image.
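  • The brightness-alignment formulas above can be written out as the following sketch. It assumes a boolean mask marking the update area of the new shading image; the helper name align_brightness and the mask representation are illustrative, not taken from the patent.

```python
import numpy as np

def align_brightness(new_base: np.ndarray, update_mask: np.ndarray) -> np.ndarray:
    """Scale the unupdated area of the new shading image by the brightness ratio."""
    v_update = new_base[update_mask].mean()       # first average brightness
    v_no_update = new_base[~update_mask].mean()   # second average brightness
    ratio = v_update / v_no_update

    aligned = new_base.astype(np.float64).copy()
    aligned[~update_mask] *= ratio                # I' = ratio * I on no_update_area
    return aligned
```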
  • after the update area in the original shading image is updated to obtain a new shading image, the new shading image can be used as the original shading image, and after a new fingerprint image is obtained, the method described in step S204 and step S206 is performed on the new shading image and the new fingerprint image.
  • the shading image corresponding to the new fingerprint image can be intercepted from the new shading image to obtain a new target shading image.
  • Combine the new target shading image and the new fingerprint image to determine the fingerprint information in the new fingerprint image, and use the fingerprint information to determine the fingerprint image after the shading is removed.
  • the specific execution process is the same as the process described above and will not be repeated here.
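  • Putting the pieces together, one capture cycle of the iteration described above might look like the sketch below. All helper functions are the illustrative ones sketched earlier in this description, not part of the patent, and the brightness-aligned new shading image is fed back as the original shading image for the next capture.

```python
import numpy as np

def process_capture(base_denoise, raw_frame, mask, x, y, width=192, height=200):
    """One illustrative capture cycle chaining the helpers sketched above."""
    raw_denoise = remove_common_mode_noise(raw_frame, mask)               # denoise the capture
    base_area = crop_target_shading(base_denoise, x, y, width, height)    # step S204
    fingerprint_info = extract_fingerprint_info(raw_denoise, base_area)   # step S206

    # Update the shading estimate and align its brightness for the next cycle.
    new_base = update_shading(base_denoise, raw_denoise, x, y, width, height)
    update_mask = np.zeros(base_denoise.shape, dtype=bool)
    update_mask[y : y + height, x : x + width] = True
    next_base = align_brightness(new_base, update_mask)

    return fingerprint_info, next_base
```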
  • the original shading image and fingerprint image after denoising are used, and the target shading image and fingerprint image are combined to perform shading removal.
  • this method can obtain a fingerprint image from which the shading has been completely removed, thereby alleviating the technical problem that traditional shading technology has a poor shading-removal effect on fingerprint images collected by the fingerprint sensor under the large-area screen, and achieving the technical effect of completely removing the shading from the fingerprint image.
  • the embodiment of the present invention also provides a fingerprint image processing device.
  • the fingerprint image processing device is mainly used to execute the fingerprint image processing method provided in the above content of the embodiment of the present invention.
  • the fingerprint image processing device provided by the embodiment of the present invention is specifically introduced below.
  • FIG. 13 is a schematic diagram of a fingerprint image processing device according to an embodiment of the present invention. As shown in FIG. 13, the fingerprint image processing device mainly includes:
  • the acquiring unit 10 is configured to acquire an original shading image and a fingerprint image to be processed, wherein the original shading image and the fingerprint image to be processed are images collected by the same fingerprint sensor, and the original shading image and the fingerprint image to be processed are images after denoising;
  • the interception unit 20 is configured to intercept the shading image corresponding to the fingerprint image to be processed from the original shading image to obtain a target shading image;
  • the determining unit 30 is configured to determine the fingerprint information in the fingerprint image to be processed in combination with the target shading image and the fingerprint image to be processed, and use the fingerprint information to determine the fingerprint image after the shading is removed.
  • the original shading image and the fingerprint image to be processed are acquired; then, the shading image corresponding to the fingerprint image to be processed is intercepted from the original shading image, so as to obtain the target shading image; finally, Combine the captured target shading image and the fingerprint image to be processed to determine the fingerprint image after shading is removed.
  • this method of removing the shading of the fingerprint image can obtain a fingerprint image from which the shading has been completely removed, thereby alleviating the technical problem that traditional shading technology has a poor shading-removal effect on fingerprint images collected by the fingerprint sensor under the large-area screen, and achieving the technical effect of obtaining a fingerprint image with the shading completely removed.
  • the acquiring unit is configured to: perform Fourier transform on the shading image collected by the sensor to obtain a target frequency domain image; use a mask image to remove high frequency components in the target frequency domain image to obtain a filtered image; Perform inverse Fourier transform on the filtered image to obtain the original shading image.
  • the acquiring unit is further configured to: determine a first designated area in the target frequency domain image, wherein the first designated area corresponds to a second designated area, and the second designated area is the area of the mask image whose pixel value is the target value; set the frequency values in the first designated area to the target value; and determine the resulting image as the filtered image.
  • the intercepting unit is configured to: determine a plurality of target coordinates in the original shading image, wherein the target coordinates are the relative coordinates of the vertices of the fingerprint image to be processed in the original shading image, and the multiple target coordinates can determine the relative area of the fingerprint image to be processed in the original shading image; and intercept, from the original shading image based on the multiple target coordinates, the shading image corresponding to the fingerprint image to be processed to obtain the target shading image.
  • the acquiring unit is further configured to: acquire an original fingerprint image; use a frequency domain filtering algorithm to remove common mode noise in the original fingerprint image to obtain the fingerprint image to be processed.
  • the determining unit is configured to: calculate the difference between the pixel values of the fingerprint image to be processed and the pixel values of the target shading image to obtain a pixel difference, and determine the pixel difference as the fingerprint information.
  • the determining unit is further configured to: perform image enhancement and denoising processing on the fingerprint information to obtain the target fingerprint image.
  • the device is further configured to: after determining the target fingerprint image by using the fingerprint information, update the update area in the original shading image through the fingerprint image to be processed to obtain a new shading image ; Perform brightness alignment processing on the new shading image to obtain an alignment processing result.
  • the device is further configured to: calculate the average brightness of the updated area in the new shading image to obtain a first brightness average value; calculate the average brightness of the unupdated area in the new shading image to obtain a second brightness average value; calculate the ratio between the first brightness average value and the second brightness average value; and use the ratio to perform brightness alignment processing on the new shading image to obtain the alignment processing result.
  • the device is also used to: use the alignment formula I'_no_update_area = ratio · I_no_update_area to perform brightness alignment processing on the new shading image to obtain the alignment processing result, where ratio is the ratio, I_no_update_area is the pixel value of the unupdated area in the new shading image, I'_no_update_area is the alignment processing result, and no_update_area is the unupdated area in the new shading image.
  • the device embodiments described above are merely illustrative.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units.
  • Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement it without creative work.
  • the various component embodiments of the present invention may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in the computing processing device according to the embodiments of the present invention.
  • the present invention can also be implemented as a device or device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program for realizing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals.
  • Such a signal can be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.
  • FIG. 14 shows a computing processing device that can implement the method according to the invention.
  • the computing processing device traditionally includes a processor 1010 and a computer program product in the form of a memory 1020 or a computer readable medium.
  • the memory 1020 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the memory 1020 has a storage space 1030 for executing program codes 1031 of any method steps in the above methods.
  • the storage space 1030 for program codes may include various program codes 1031 respectively used to implement various steps in the above method. These program codes can be read from or written into one or more computer program products.
  • These computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards, or floppy disks. Such computer program products are usually portable or fixed storage units as described with reference to FIG. 15.
  • the storage unit may have storage segments, storage spaces, etc. arranged similarly to the memory 1020 in the computing processing device of FIG. 14.
  • the program code can, for example, be compressed in an appropriate form.
  • the storage unit includes computer-readable code 1031', that is, code that can be read by a processor such as 1010, which, when run by a computing processing device, causes the computing processing device to execute the various steps of the method described above.
  • the terms "installed", "connected", and "connected to" should be understood in a broad sense; for example, they may be fixed connections, detachable connections, or integral connections; they may be mechanical connections or electrical connections; they may be direct connections or indirect connections through an intermediate medium, or internal communication between two components.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be through some communication interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a non-volatile computer readable storage medium executable by a processor.
  • the technical solution of the present invention essentially or the part that contributes to the existing technology or the part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including Several instructions are used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present invention.
  • the aforementioned storage media include: U disk, mobile hard disk, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.
  • any reference signs placed between parentheses should not be constructed as a limitation to the claims.
  • the word “comprising” does not exclude the presence of elements or steps not listed in the claims.
  • the word “a” or “an” preceding an element does not exclude the presence of multiple such elements.
  • the invention can be implemented by means of hardware comprising several different elements and by means of a suitably programmed computer. In the unit claims listing several devices, several of these devices may be embodied in the same hardware item.
  • the use of the words first, second, and third, etc. do not indicate any order. These words can be interpreted as names.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Input (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, and provides a fingerprint image processing method and apparatus, an electronic device, and a computer-readable medium. The method comprises: obtaining an original shading image and a fingerprint image to be processed, the original shading image and the fingerprint image to be processed being images collected by the same fingerprint sensor, and the original shading image and the fingerprint image to be processed being denoised images; cropping, from the original shading image, a shading image corresponding to the fingerprint image to be processed so as to obtain a target shading image; and, by combining the target shading image and the fingerprint image to be processed, determining fingerprint information in the fingerprint image to be processed, and using the fingerprint information to determine a fingerprint image after removal of the shading. The invention solves the technical problem whereby conventional shading-removal technology has a poor shading-removal effect on a fingerprint image collected by a large-area under-screen fingerprint sensor.
PCT/CN2020/119537 2019-12-10 2020-09-30 Fingerprint image processing method and apparatus, electronic device and computer-readable medium WO2021114843A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911265411.2A CN111028176B (zh) 2019-12-10 2019-12-10 指纹图像处理方法、装置、电子设备及计算机可读介质
CN201911265411.2 2019-12-10

Publications (1)

Publication Number Publication Date
WO2021114843A1 true WO2021114843A1 (fr) 2021-06-17

Family

ID=70208765

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/119537 WO2021114843A1 (fr) 2019-12-10 2020-09-30 Fingerprint image processing method and apparatus, electronic device and computer-readable medium

Country Status (2)

Country Link
CN (1) CN111028176B (fr)
WO (1) WO2021114843A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028176B (zh) * 2019-12-10 2023-12-08 天津极豪科技有限公司 指纹图像处理方法、装置、电子设备及计算机可读介质
CN112257571A (zh) * 2020-10-20 2021-01-22 北京集创北方科技股份有限公司 生物特征图像的采集方法以及电子设备
CN112434572B (zh) * 2020-11-09 2022-05-06 北京极豪科技有限公司 指纹图像校准方法、装置、电子设备及存储介质
CN112950503A (zh) * 2021-02-26 2021-06-11 北京小米松果电子有限公司 训练样本的生成方法及装置、真值图像的生成方法及装置


Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102147861A (zh) * 2011-05-17 2011-08-10 北京邮电大学 一种基于颜色-纹理双重特征向量进行贝叶斯判决的运动目标检测方法
KR101468381B1 (ko) * 2013-12-12 2014-12-03 주식회사 슈프리마 지문 영상 획득 방법 및 장치
CN103824071A (zh) * 2014-02-21 2014-05-28 江苏恒成高科信息科技有限公司 带静电放电功能的静电电容检测型指纹读取传感器
US9704015B2 (en) * 2015-11-04 2017-07-11 Himax Technologies Limited Fingerprint image processing method and device
CN107194932B (zh) * 2017-04-24 2020-05-05 江苏理工学院 一种基于指数遗忘的自适应背景重建算法
CN108121946B (zh) * 2017-11-15 2021-08-03 大唐微电子技术有限公司 一种指纹图像预处理方法及装置
CN110263667B (zh) * 2019-05-29 2022-02-22 Oppo广东移动通信有限公司 图像数据处理方法、装置以及电子设备
CN110232349B (zh) * 2019-06-10 2020-07-03 北京迈格威科技有限公司 屏下指纹去底纹方法、装置、计算机设备和存储介质
CN110298316A (zh) * 2019-06-29 2019-10-01 Oppo广东移动通信有限公司 指纹识别方法及相关产品
CN110309776A (zh) * 2019-06-29 2019-10-08 Oppo广东移动通信有限公司 指纹模板录入方法及相关产品

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010182328A (ja) * 2010-03-24 2010-08-19 Mitsubishi Electric Corp 指紋画像撮像装置
CN109858418A (zh) * 2019-01-23 2019-06-07 上海思立微电子科技有限公司 指纹图像的处理方法和装置
CN110363121A (zh) * 2019-07-01 2019-10-22 Oppo广东移动通信有限公司 指纹图像处理方法及装置、存储介质和电子设备
CN110348425A (zh) * 2019-07-19 2019-10-18 北京迈格威科技有限公司 去底纹的方法、装置、设备及计算机可读存储介质
CN111028176A (zh) * 2019-12-10 2020-04-17 北京迈格威科技有限公司 指纹图像处理方法、装置、电子设备及计算机可读介质

Also Published As

Publication number Publication date
CN111028176A (zh) 2020-04-17
CN111028176B (zh) 2023-12-08

Similar Documents

Publication Publication Date Title
WO2021114843A1 (fr) Procédé et appareil de traitement d'image d'empreinte digitale, dispositif électronique et support lisible par ordinateur
CN108875723B (zh) 对象检测方法、装置和系统及存储介质
WO2015043382A1 (fr) Appareil et procédé de capture d'images applicables à un dispositif de capture d'écran
CN112102164B (zh) 一种图像处理方法、装置、终端及存储介质
CN109215037B (zh) 目标图像分割方法、装置及终端设备
CN108596916B (zh) 一种颜色相近的水印识别方法、系统、终端及介质
JP5388835B2 (ja) 情報処理装置及び情報処理方法
JP6401855B2 (ja) Uiコントロールの背景を設定するための方法及び装置、並びに端末
WO2020232910A1 (fr) Procédé et appareil de comptage de cibles basés sur un traitement d'image, dispositif et support de stockage
CN107610059B (zh) 一种图像处理方法及移动终端
WO2021036442A1 (fr) Procédé et appareil de filtration lisse préservant les bords cycliques, et dispositif électronique
CN113781406B (zh) 电子元器件的划痕检测方法、装置及计算机设备
JP5849206B2 (ja) 画像処理装置、画像処理方法、及び画像処理プログラム
CN110751218A (zh) 图像分类方法、图像分类装置及终端设备
CN114298902A (zh) 一种图像对齐方法、装置、电子设备和存储介质
CN108229583B (zh) 一种基于主方向差分特征的快速模板匹配的方法及装置
Yung et al. Efficient feature-based image registration by mapping sparsified surfaces
CN112634235A (zh) 产品图像的边界检测方法和电子设备
CN112149570A (zh) 多人活体检测方法、装置、电子设备及存储介质
WO2018053710A1 (fr) Procédé de traitement morphologique d'image numérique et dispositif de traitement d'image numérique
WO2020082731A1 (fr) Dispositif électronique, procédé de reconnaissance de justificatif d'identité et support d'informations
WO2022199395A1 (fr) Procédé de détection d'activité faciale, dispositif terminal et support de stockage lisible par ordinateur
CN111754435A (zh) 图像处理方法、装置、终端设备及计算机可读存储介质
CN110222576B (zh) 拳击动作识别方法、装置和电子设备
CN111768345A (zh) 身份证背面图像的校正方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20898332

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20898332

Country of ref document: EP

Kind code of ref document: A1