CN114004744B - Fingerprint splicing method and device, electronic equipment and medium - Google Patents

Fingerprint splicing method and device, electronic equipment and medium

Info

Publication number
CN114004744B
Authority
CN
China
Prior art keywords
image
pixel
target
fingerprint
edge area
Prior art date
Legal status
Active
Application number
CN202111204112.5A
Other languages
Chinese (zh)
Other versions
CN114004744A (en)
Inventor
王玉坚
Current Assignee
Shenzhen Yaliote Technology Co ltd
Original Assignee
Shenzhen Yaliote Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Yaliote Technology Co ltd filed Critical Shenzhen Yaliote Technology Co ltd
Priority to CN202111204112.5A priority Critical patent/CN114004744B/en
Publication of CN114004744A publication Critical patent/CN114004744A/en
Application granted granted Critical
Publication of CN114004744B publication Critical patent/CN114004744B/en

Classifications

    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T7/12 Edge-based segmentation
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/90 Determination of colour characteristics
    • G06T2207/20132 Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application relates to a fingerprint splicing method, comprising the following steps: cropping edge area images from a plurality of fingerprint images of a user; determining, according to the global features of the edge area images, which edge area images coincide; identifying the coincidence area between any one of those images and the other, unselected images; selecting any pixel point from the coincidence area as a reference point; constructing coordinate systems in the coinciding images according to the reference point; and splicing the coinciding images according to the constructed coordinate systems. The application further relates to a fingerprint splicing device, an electronic device and a storage medium. The application can solve the problem of low efficiency when splicing fingerprints.

Description

Fingerprint splicing method and device, electronic equipment and medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a fingerprint stitching method, apparatus, electronic device, and computer readable storage medium.
Background
When fingerprints are collected, a user is usually required to press the center, left side, right side, upper side, lower side and so on of the finger multiple times in a preset area, and the fingerprint areas collected from each press are spliced, so that a complete fingerprint image of the user can be obtained.
In the prior art, when the different collected fingerprint areas are spliced, feature analysis must be performed on all pixel points in each collected fingerprint area in order to identify the feature information of that fingerprint area, and all the fingerprint areas are then spliced into a complete fingerprint according to the identified feature information. Because all the pixel points in the different fingerprint areas need to be analyzed many times, this method occupies a large amount of computing resources, and the efficiency of fingerprint splicing is low.
Disclosure of Invention
The application provides a fingerprint splicing method, a fingerprint splicing device, an electronic device and a storage medium, so as to solve the problem of low efficiency in splicing fingerprints.
In a first aspect, the present application provides a fingerprint stitching method, the method including:
acquiring a plurality of fingerprint images of a user, and cutting out an edge area image of each fingerprint image;
extracting global features of each edge region image, selecting one of the edge region images as a target image, and respectively calculating the coincidence degree between the target image and the other unselected edge region images according to the global features;
selecting the edge area image with the largest contact ratio from other unselected edge area images as an image to be spliced, and identifying the contact area between the image to be spliced and the target image;
selecting any pixel point from the overlapping area as a target pixel point, and, taking the target pixel point as the same reference point, constructing a first coordinate system in the target image and a second coordinate system in the image to be spliced;
traversing the position coordinates of each pixel point in the first coordinate system and the second coordinate system, and splicing the target image and the image to be spliced according to the position coordinates.
In detail, the cropping out the edge area image of each fingerprint image includes:
selecting one fingerprint image from the fingerprint images;
measuring the size of a fingerprint in the fingerprint image and selecting a center pixel of the fingerprint;
and calculating a clipping range according to the size, clipping the selected fingerprint image according to the central pixel and the clipping range, and obtaining an edge area image of the selected fingerprint image.
In detail, the extracting the global feature of each edge area image includes:
selecting one of the edge area images one by one from the edge area images, and counting the pixel value of each pixel point in the selected edge area image;
taking the maximum pixel value and the minimum pixel value among the pixel values as parameters of a preset mapping function, and mapping the pixel value of each pixel point in the selected edge area image into a preset range by utilizing the preset mapping function;
and calculating the pixel gradient of each row of pixels in the mapped edge region image, converting the pixel gradient of each row of pixels into row vectors, and splicing the row vectors into global features of the edge region image.
In detail, the identifying the overlapping area of the image to be spliced and the target image includes:
utilizing a pre-constructed sliding window to frame and select the areas in the images to be spliced one by one to obtain a pixel window;
selecting one pixel point from the pixel window one by one as a target pixel point;
judging whether the pixel value of the target pixel point is an extremum in the pixel window;
when the pixel value of the target pixel point is not an extremum in the pixel window, returning to the step of selecting one pixel point from the pixel window one by one as the target pixel point;
when the pixel value of the target pixel point is an extremum in the pixel window, determining the target pixel point as a key point;
Vectorizing pixel values of all key points in all pixel windows, and collecting the obtained vectors into local features of the pixel windows;
and determining the superposition area of the image to be spliced and the target image according to the local characteristics of the image to be spliced and the local characteristics of the target image.
In detail, the determining the overlapping area of the image to be stitched and the target image according to the local feature of the image to be stitched and the local feature of the target image includes:
selecting one pixel window from the pixel windows of the images to be spliced one by one as a target window, and calculating distance values between local features of the target window and local features of other pixel windows of the target image;
when no pixel window with the distance value smaller than the preset threshold value between the local features of the target window exists, returning to the step of selecting one pixel window from the pixel windows of the images to be spliced one by one as the target window;
and when a pixel window whose distance value from the local feature of the target window is smaller than the preset threshold value exists, determining the region framed by the target window in the image to be spliced and the region framed by that pixel window in the target image as an overlapping region.
In detail, the constructing a first coordinate system in the target image and a second coordinate system in the image to be stitched with the target pixel point as the same reference point respectively includes:
constructing an abscissa from an origin in a horizontal direction and an ordinate from the origin in a vertical direction by taking any point in the target image as the origin;
measuring the vertical distance between the reference point and the abscissa or the ordinate, multiplying the vertical distance by a preset scaling factor, and taking the multiplied result as a unit scale;
performing scale marking on the horizontal coordinate and the vertical coordinate by utilizing the unit scale to obtain a first coordinate system;
and determining the coordinate value of the reference point in the first coordinate system, determining the origin of the second coordinate system according to the coordinate value of the reference point in the image to be spliced, and marking the second coordinate system according to the unit scale to obtain the second coordinate system.
In detail, the stitching the target image and the image to be stitched according to the position coordinates includes:
randomly selecting an image from the target image and the image to be spliced, and traversing the pixel value and the position coordinate of each pixel point in the selected image;
And filling the pixel value of each pixel point in the selected image into the unselected image according to the position coordinates, and completing the splicing of the target image and the image to be spliced.
In a second aspect, the present application provides a fingerprint stitching device, the device comprising:
the image clipping module is used for acquiring a plurality of fingerprint images of a user and clipping out an edge area image of each fingerprint image;
the first feature extraction module is used for extracting global features of each edge region image, selecting one of the edge region images as a target image, and respectively calculating the coincidence ratio between the target image and the other unselected edge region images according to the global features;
the overlapping region screening module is used for selecting the edge region image with the largest overlapping ratio from other unselected edge region images as an image to be spliced, and identifying the overlapping region of the image to be spliced and the target image;
the coordinate system construction module is used for arbitrarily selecting one pixel point from the overlapping area as a target pixel point, respectively constructing a first coordinate system in the target image by taking the target pixel point as the same reference point, and constructing a second coordinate system in the image to be spliced;
And the image stitching module is used for traversing the position coordinates of each pixel point in the first coordinate system and the second coordinate system, and stitching the target image and the image to be stitched according to the position coordinates.
In a third aspect, an electronic device is provided, including a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the steps of the fingerprint splicing method according to any embodiment of the first aspect when executing the program stored in the memory.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, implements the steps of the fingerprint stitching method according to any one of the embodiments of the first aspect.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
according to the method provided by the embodiment of the application, the fingerprint image can be cut, the number of pixels required to be analyzed when the image is spliced is reduced, and the efficiency of fingerprint splicing is improved; meanwhile, through analyzing the residual image areas after clipping, the areas which are overlapped with each other in the image are obtained, and a coordinate system is established according to the pixel points in the overlapped areas to splice the image, so that each pixel in the image is not required to be analyzed, the image is spliced by directly utilizing the mapping relation of the coordinate system, the image splicing efficiency is further improved, and the problem of low efficiency in splicing fingerprints can be solved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a fingerprint stitching method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of identifying a superposition area of an image to be stitched and a target image according to an embodiment of the present application;
fig. 3 is a schematic flow chart of stitching a target image and an image to be stitched according to position coordinates provided in an embodiment of the present application;
fig. 4 is a schematic block diagram of a fingerprint splicing device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device for fingerprint stitching according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
Fig. 1 is a schematic flow chart of a fingerprint stitching method according to an embodiment of the present application. In this embodiment, the fingerprint stitching method includes:
s1, acquiring a plurality of fingerprint images of a user, and cutting out an edge area image of each fingerprint image.
In this embodiment of the present application, the fingerprint images belong to the same user, and the user fingerprint is collected by the fingerprint collection device, so as to obtain the fingerprint images of the user.
For example, a fingerprint image generated by the user pressing on a preset pressure plate is acquired, or a fingerprint image of the user captured by an image acquisition device (camera, video recorder, etc.) is acquired.
In one practical application scenario of the present application, because a person's finger is a curved surface, fingerprint images of multiple orientations, such as the center, left side, right side, upper side and lower side, are acquired during fingerprint collection, and the fingerprint images obtained at different positions are then spliced into a complete fingerprint. However, because every fingerprint image contains a large amount of pixel information, directly analyzing every fingerprint image would occupy a large amount of computing resources.
According to the embodiment of the application, after the plurality of fingerprint images of the user are acquired, each fingerprint image is cropped to obtain an edge area image covering a certain range of that fingerprint image; only the edge area images then need to be analyzed before the fingerprint images of different orientations are spliced into a complete fingerprint according to the analysis result. This avoids analyzing all the pixel information in each fingerprint image, reduces the occupation of computing resources, and improves fingerprint splicing efficiency.
In this embodiment of the present application, the cropping out the edge area image of each fingerprint image includes:
selecting one fingerprint image from the fingerprint images;
measuring the size of a fingerprint in the fingerprint image and selecting a center pixel of the fingerprint;
and calculating a clipping range according to the size, clipping the selected fingerprint image according to the central pixel and the clipping range, and obtaining an edge area image of the selected fingerprint image.
In detail, the size of the fingerprint in the selected fingerprint image may be measured by a tool (such as a range finder, a measuring ruler, etc.) having a size measuring function, where the size is the height and width (for example, the height is 1000 pixels and the width is 800 pixels) of the fingerprint in the fingerprint image, and the center pixel is the pixel point of the center of the fingerprint in the fingerprint image.
Specifically, the calculating the clipping range according to the size includes:
calculating a clipping range according to the size by using the following proportion algorithm:
F=α*C
wherein F is the clipping range, C is the size of the fingerprint in the selected fingerprint image, and alpha is a preset range coefficient.
For example, if the height of the fingerprint in the fingerprint image is 1000 pixels, the width is 800 pixels, and the preset range coefficient α=0.8, the clipping range is calculated to be 800 pixels for height clipping and 640 pixels for width clipping.
In this embodiment of the present application, the pixel points in the neighborhood of the central pixel may be clipped according to the clipping range, for example, when the clipping range is 800 pixels in height clipping and 640 pixels in width clipping, the pixel points in the upper and lower 400 pixel ranges and the pixel points in the left and right 320 pixel ranges of the central pixel are clipped, and after clipping, the remaining area is used as the edge area image of the target image.
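The cropping procedure above (measure the fingerprint size C, compute the clipping range F = α*C, and clip the neighbourhood of the centre pixel) can be sketched as follows; the function name, the use of NumPy, and blanking the clipped region with NaN are illustrative assumptions, not details from the patent.

```python
import numpy as np

def crop_edge_area(fingerprint, alpha=0.8):
    # Clipping range F = alpha * C, where C is the fingerprint size
    # (here the whole image is treated as the fingerprint for simplicity).
    h, w = fingerprint.shape
    crop_h, crop_w = int(alpha * h), int(alpha * w)
    cy, cx = h // 2, w // 2          # centre pixel of the fingerprint
    edge = fingerprint.astype(float).copy()
    # Clip the crop_h x crop_w neighbourhood of the centre pixel;
    # the remaining ring of pixels is the edge area image.
    edge[cy - crop_h // 2:cy + crop_h // 2,
         cx - crop_w // 2:cx + crop_w // 2] = np.nan
    return edge

img = np.ones((1000, 800))
edge = crop_edge_area(img, alpha=0.8)
# With alpha = 0.8, the central 800 x 640 block (rows 100..899,
# columns 80..719) is clipped, matching the worked example above.
```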
S2, extracting global features of each edge region image, selecting one of the edge region images as a target image, and respectively calculating the coincidence degree between the target image and the unselected edge region images according to the global features.
In this embodiment of the present application, cropping the plurality of fingerprint images yields the same number of edge area images. To splice the plurality of fingerprint images into a complete fingerprint, each edge area image needs to be processed to judge whether any of the edge area images coincide, so that the coinciding regions can then be used to splice the plurality of fingerprint images.
In this embodiment of the present application, the global feature of each edge area image may be extracted to analyze the edge area image, so as to avoid directly performing detailed analysis on pixels in the edge area image, so as to improve analysis efficiency, where the global feature includes a color feature, a shape feature, a texture feature, and other features of the image that are used to describe the whole image.
In this embodiment of the present application, one of the edge area images may be selected one by one from the edge area images as a target image, global features of the target image may be extracted, and the step of selecting the target image may be returned until the global features of each of the edge area images are extracted.
In the embodiment of the application, the global feature of the edge region image can be extracted by means of HOG (Histogram of Oriented Gradients), DPM (Deformable Part Model), LBP (Local Binary Patterns) and the like.
In one embodiment of the present application, the extracting global features of each edge area image includes:
selecting one of the edge area images one by one from the edge area images, and counting the pixel value of each pixel point in the selected edge area image;
taking the maximum pixel value and the minimum pixel value in the pixel values as parameters of a preset mapping function, and mapping the pixel value of each pixel point in the selected edge area image into a preset range by utilizing the preset function;
and calculating the pixel gradient of each row of pixels in the mapped edge region image, converting the pixel gradient of each row of pixels into row vectors, and splicing the row vectors into global features of the edge region image.
Illustratively, the preset mapping function may be:
Y_i = (x_i - min(X)) / (max(X) - min(X))
wherein Y_i is the pixel value of the i-th pixel point in the selected edge area image after being mapped into the preset range, x_i is the pixel value of the i-th pixel point in the selected edge area image, max(X) is the maximum pixel value in the selected edge area image, and min(X) is the minimum pixel value in the selected edge area image.
Further, a preset gradient algorithm may be used to calculate the pixel gradient of each row of pixels in the mapped edge region image, where the gradient algorithm includes, but is not limited to, a two-dimensional discrete derivative algorithm, the Sobel operator, and the like.
In the embodiment of the application, the pixel gradient of each row of pixels can be converted into a row vector and spliced into the global feature of the edge area image.
For example, the selected edge area image includes three rows of pixels, where the pixel gradients of the first row of pixels are a, b, c, the pixel gradients of the second row of pixels are d, e, f, and the pixel gradients of the third row of pixels are g, h, i, and then the pixel gradients of each row of pixels can be respectively used as row vectors to be spliced into the following global features:
[a b c]
[d e f]
[g h i]
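The steps above — min-max mapping followed by per-row pixel gradients stacked as row vectors — can be sketched as below; the forward-difference gradient (`np.diff`) is a stand-in, since the patent leaves the gradient operator open.

```python
import numpy as np

def global_feature(edge_img):
    x = edge_img.astype(float)
    # Map every pixel into [0, 1]: Y_i = (x_i - min(X)) / (max(X) - min(X))
    y = (x - x.min()) / (x.max() - x.min())
    # Pixel gradient of each row (forward difference is an assumption);
    # each row of the result is one row vector of the global feature.
    return np.diff(y, axis=1)

img = np.array([[0.0, 10.0, 20.0],
                [30.0, 40.0, 50.0],
                [60.0, 70.0, 90.0]])
feat = global_feature(img)   # one row vector per image row
```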
in this embodiment of the present application, one of the edge area images may be selected one by one from the edge area images as a target image, and the contact ratio between the target image and the unselected image in the edge area image may be calculated respectively.
In detail, the calculating the coincidence ratio between the target image and the unselected image in the edge area image according to the global feature includes:
and respectively calculating the contact ratio between the target image and the unselected image in the edge area image by using the following contact ratio algorithm:
Cov = (1/N) * Σ_i (a_i - mean(a)) * (b_n,i - mean(b_n))
wherein Cov is the coincidence degree, a is the global feature of the target image, b_n is the global feature of the nth edge area image, and N is the number of feature elements.
S3, selecting the edge area image with the largest contact ratio from other unselected edge area images as an image to be spliced, and identifying the contact area between the image to be spliced and the target image.
In this embodiment of the present application, an edge area image with the largest contact ratio with the target image may be selected from the edge area images as an image to be stitched.
For example, suppose an image A, an image B and an image C exist among the edge area images, where the coincidence degree between the global feature of image A and the global feature of the target image is 30, that of image B is 80, and that of image C is 50; then image B is selected as the image to be stitched with the target image.
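Selecting the candidate with the largest coincidence degree might look like the sketch below. The patent's coincidence formula is rendered as an image in the original, so a covariance-style similarity of flattened global features is used here purely as a stand-in.

```python
import numpy as np

def coincidence(a, b):
    # Stand-in similarity: covariance of the flattened global features
    # (the patent's exact formula is not reproduced here).
    a, b = np.ravel(a).astype(float), np.ravel(b).astype(float)
    return float(np.mean((a - a.mean()) * (b - b.mean())))

def pick_image_to_stitch(target_feat, candidate_feats):
    # Return the index of the edge area image with the largest
    # coincidence degree with the target image.
    scores = [coincidence(target_feat, f) for f in candidate_feats]
    return int(np.argmax(scores))

target = np.array([1.0, 2.0, 3.0, 4.0])
candidates = [np.array([4.0, 3.0, 2.0, 1.0]),   # "image A"
              np.array([1.0, 2.0, 3.0, 4.0]),   # "image B"
              np.array([2.0, 2.0, 3.0, 3.0])]   # "image C"
best = pick_image_to_stitch(target, candidates)  # image B wins
```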
Further, the global feature identifies the image as a whole, but when images are spliced, the completely overlapping parts of the different images need to be found in order to improve the accuracy of image splicing. Therefore, in the embodiment of the application, the local features of the image to be spliced and the local features of the target image can be extracted.
In this embodiment, local features of the target image and the image to be spliced may be extracted by methods such as LOG (Laplacian of Gaussian), DOH (Determinant of Hessian, blob detection), SIFT (Scale-Invariant Feature Transform) and the like, where the local features include, but are not limited to, blobs and corner points.
In one embodiment of the present application, referring to fig. 2, the identifying the overlapping area of the image to be stitched and the target image includes:
s21, utilizing a pre-constructed sliding window to frame and select the areas in the images to be spliced one by one to obtain a pixel window;
s22, selecting one pixel point from the pixel window one by one as a target pixel point;
s23, judging whether the pixel value of the target pixel point is an extremum in the pixel window;
Returning to S22 when the pixel value of the target pixel point is not an extremum in the pixel window;
when the pixel value of the target pixel point is an extremum in the pixel window, executing S24, and determining the target pixel point as a key point;
s25, vectorizing pixel values of all key points in all pixel windows, and collecting the obtained vectors as local features of the pixel windows;
s26, determining the superposition area of the image to be spliced and the target image according to the local characteristics of the image to be spliced and the local characteristics of the target image.
In this embodiment of the present application, the sliding window may be a pre-constructed selection frame with a certain area, which may be used to perform frame selection on pixels in the image to be stitched, for example, a square selection frame constructed with 10 pixels as a height and 10 pixels as a width.
In detail, the extremum includes a maximum value and a minimum value, and when the pixel value of the target pixel point is the maximum value or the minimum value in the pixel window, the target pixel point is determined to be the key point of the pixel window.
Specifically, the step of vectorizing the pixel values of all the key points in the pixel window is consistent with the step of calculating the pixel gradient of each row of pixels in the mapped edge area image and converting the pixel gradient of each row of pixels into a row vector in S2, and will not be described again.
Further, the steps of extracting the local features of the images to be spliced and extracting the local features of the target image are consistent with the steps of extracting the local features of the images to be spliced, and are not repeated again.
In this embodiment of the present application, since the local feature is a detailed feature of some pixel points in the target image and the image to be stitched, a region where the image to be stitched coincides with the target image may be determined by using the local feature of the image to be stitched and the local feature of the target image.
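Steps S21 to S24 — sliding a window over the image to be stitched and keeping as key points the pixels whose value is an extremum within their window — can be sketched as below; the window size and the unit stride are assumptions, not details fixed by the patent.

```python
import numpy as np

def window_keypoints(img, win=3):
    # Slide a win x win pixel window over the image; a pixel whose value
    # is the extremum (maximum or minimum) of its window is a key point.
    h, w = img.shape
    keypoints = []
    for r in range(0, h - win + 1):
        for c in range(0, w - win + 1):
            window = img[r:r + win, c:c + win]
            centre = img[r + win // 2, c + win // 2]
            if centre == window.max() or centre == window.min():
                keypoints.append((r + win // 2, c + win // 2))
    return keypoints

img = np.array([[1, 1, 1],
                [1, 9, 1],
                [1, 1, 1]], dtype=float)
kps = window_keypoints(img)   # the centre pixel 9 is the window maximum
```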
In this embodiment of the present application, the determining, according to the local feature of the image to be stitched and the local feature of the target image, the overlapping area of the image to be stitched and the target image includes:
selecting one pixel window from the pixel windows of the images to be spliced one by one as a target window, and calculating distance values between local features of the target window and local features of other pixel windows of the target image;
judging whether a pixel window with a distance value smaller than a preset threshold value between the pixel window of the image to be spliced and the local feature of the target window exists or not;
When no pixel window with the distance value smaller than the preset threshold value between the local features of the target window exists, returning to the step of selecting one pixel window from the pixel windows of the images to be spliced one by one as the target window;
and when a pixel window whose distance value from the local feature of the target window is smaller than the preset threshold value exists, determining the region framed by the target window in the image to be spliced and the region framed by that pixel window in the target image as an overlapping region.
In detail, the calculating the distance value between the local feature of the target window and the local feature of the other pixel windows of the target image includes:
calculating distance values between the local features of the target window and the local features of other pixel windows of the target image by using the following distance value algorithm:
D = sqrt( Σ_i (p_i - q_m,i)^2 )
wherein D is the distance value, p is the local feature of the target window selected from the pixel windows of the image to be spliced, and q_m is the local feature of the mth pixel window in the target image.
In the embodiment of the present application, when none of the pixel windows of the target image has a distance value smaller than the preset threshold, no region framed by a pixel window of the target image exists in the image to be stitched, so the method returns to the step of selecting, one by one, a pixel window of the image to be stitched as the target window; when a pixel window of the target image does have a distance value smaller than the preset threshold, the region framed by that pixel window in the target image coincides with the region framed by the target window in the image to be stitched.
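The matching procedure above can be sketched as follows. Note that the patent's distance formula appears only as an image, so the Euclidean distance used here is an assumption, and the toy feature vectors are purely illustrative:

```python
import math

def find_overlap(stitch_windows, target_windows, threshold):
    """Try each window of the image to be stitched as the target window;
    return the first (stitch_index, target_index) pair whose local-feature
    distance falls below the threshold, or None if no window matches.
    Euclidean distance is assumed; the patent shows its formula only
    as an image."""
    for i, p in enumerate(stitch_windows):
        for j, q in enumerate(target_windows):
            d = math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
            if d < threshold:
                return i, j
    return None  # no overlapping region found

# toy local features, one vector per pixel window (illustrative values)
stitch = [[0.9, 0.1], [0.5, 0.5]]
target = [[0.0, 1.0], [0.52, 0.48]]
print(find_overlap(stitch, target, threshold=0.1))  # (1, 1): second windows coincide
```

When no pair falls below the threshold, the function returns `None`, which corresponds to returning to the window-selection step in the method above.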
S4, randomly selecting one pixel point from the overlapping area as a target pixel point, respectively constructing a first coordinate system in the target image by taking the target pixel point as the same reference point, and constructing a second coordinate system in the image to be spliced.
In this embodiment of the present application, since all the pixel points in the overlapping area of the image to be stitched and the target image coincide, one pixel point may be selected arbitrarily from the overlapping area as the target pixel point; the target pixel point is then used as the same reference point to construct a first coordinate system in the target image and a second coordinate system in the image to be stitched.
In this embodiment of the present application, the constructing a first coordinate system in the target image and constructing a second coordinate system in the image to be stitched with the target pixel point as the same reference point includes:
constructing an abscissa from an origin in a horizontal direction and an ordinate from the origin in a vertical direction by taking any point in the target image as the origin;
measuring the vertical distance between the reference point and the abscissa or the ordinate, multiplying the vertical distance by a preset scaling factor, and taking the multiplied result as a unit scale;
Performing scale marking on the horizontal coordinate and the vertical coordinate by utilizing the unit scale to obtain a first coordinate system;
and determining the coordinate value of the reference point in the first coordinate system, determining the origin of the second coordinate system according to the coordinate value of the reference point in the image to be spliced, and marking the second coordinate system according to the unit scale to obtain the second coordinate system.
For example, a pixel point other than the reference point is selected arbitrarily from the target image and taken as the origin; an abscissa axis is constructed from the origin along the horizontal direction and an ordinate axis along the vertical direction. Suppose the measured vertical distance between the reference point and the abscissa or ordinate axis is 10. This distance is multiplied by a preset scaling factor (for example, one tenth), the product is taken as the unit scale, and the abscissa and ordinate axes are graduated with the unit scale to obtain the first coordinate system.
Further, after the coordinate value of the reference point in the first coordinate system is obtained, an origin of a second coordinate system can be determined in the image to be spliced according to the coordinate value identical to the reference point, and the second coordinate system can be constructed in the image to be spliced according to the determined origin of the second coordinate system.
In one embodiment of the present application, the reference point may be used as an origin, an abscissa is constructed from the reference point along a horizontal direction, an ordinate is constructed from the reference point along a vertical direction, and scales are marked on the abscissa and the ordinate according to a preset length unit scale, so as to obtain a first coordinate system.
In this embodiment of the present application, the step of constructing the second coordinate system in the image to be spliced by using the target pixel point as a reference point is consistent with the step of constructing the first coordinate system in the target image by using the target pixel point as a reference point, and will not be described again.
In this embodiment of the present application, a coordinate system is constructed in each of the image to be stitched and the target image through the coinciding pixel point (the reference point), so that the coordinates of every pixel in both images and the coordinates of the reference point are unified into the same plane coordinate system, facilitating subsequent accurate fingerprint stitching.
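A minimal sketch of this unification: once the same physical pixel is chosen as the reference point in both images, translating each image's pixel positions so that the reference point becomes the origin places both images in one frame (the coordinates and function name below are illustrative assumptions, not the patent's notation):

```python
def to_reference_frame(points, ref):
    """Translate pixel positions so the shared reference point becomes
    the origin; applied to both images, coinciding pixels end up with
    identical coordinates in the unified plane coordinate system."""
    rx, ry = ref
    return [(x - rx, y - ry) for (x, y) in points]

# the same two physical pixels, seen at different positions in each image
target_pts = [(3, 5), (4, 6)]   # reference point at (3, 5) in the target image
stitch_pts = [(7, 2), (8, 3)]   # reference point at (7, 2) in the image to be stitched
print(to_reference_frame(target_pts, (3, 5)))  # [(0, 0), (1, 1)]
print(to_reference_frame(stitch_pts, (7, 2)))  # [(0, 0), (1, 1)] -- same frame
```

Because both coinciding pixels map to the same coordinates, a later pixel-filling step can merge the images by coordinate lookup alone.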
And S5, traversing the position coordinates of each pixel point in the first coordinate system and the second coordinate system, and splicing the target image and the image to be spliced according to the position coordinates.
In this embodiment of the present application, since the position coordinates of all the pixels in the image to be stitched and the target image are determined relative to the coordinates of the reference point, the target image and the image to be stitched may be stitched by traversing the position coordinates of each pixel point in the first coordinate system and the second coordinate system and merging the two images according to those position coordinates.
In this embodiment, referring to fig. 3, the stitching of the target image and the image to be stitched according to the position coordinates includes:
S31, randomly selecting an image from the target image and the image to be stitched, and traversing the pixel value and the position coordinate of each pixel point in the selected image;
S32, filling the pixel value of each pixel point in the selected image into the unselected image according to the position coordinates, thereby completing the stitching of the target image and the image to be stitched.
For example, the selected image includes a pixel a and a pixel B, where the position coordinate of the pixel a is (3, 5), the pixel value of the pixel a is 100, the position coordinate of the pixel B is (4, 6), and the pixel value of the pixel B is 200, and the pixel value of the pixel with the position coordinate of (3, 5) in the unselected image may be set to 100, and the pixel value of the pixel with the position coordinate of (4, 6) in the unselected image may be set to 200.
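The filling step of S31 and S32 can be sketched with the example values above, treating each image as a mapping from position coordinate to pixel value (an illustrative data layout, not the patent's representation):

```python
def fill_pixels(selected, unselected):
    """Copy every pixel of the selected image into the unselected image
    at the same position coordinate; coordinates not yet present are
    added, so the result covers the union of both images."""
    stitched = dict(unselected)   # coordinate -> pixel value
    stitched.update(selected)     # selected image's values fill in / overwrite
    return stitched

selected = {(3, 5): 100, (4, 6): 200}    # pixels A and B from the example
unselected = {(3, 5): 100, (2, 2): 50}   # overlapping pixel plus its own content
print(fill_pixels(selected, unselected))  # {(3, 5): 100, (2, 2): 50, (4, 6): 200}
```

The pixel at (3, 5) coincides in both images, so overwriting it is harmless; the pixel at (4, 6) extends the unselected image, which is what grows the stitched fingerprint.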
In this embodiment of the present application, after the stitching of the target image and the image to be stitched according to the position coordinates, the method further includes: selecting, one by one, one of the remaining edge area images as the target image for stitching, until all the fingerprint images of the user have been stitched, thereby obtaining the complete fingerprint of the user.
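The overall iteration can be sketched as an outer loop; here `pick_best` and `stitch_pair` are hypothetical stand-ins for the coincidence-degree computation and coordinate-based stitching described above, and the set-based toy "images" are purely illustrative:

```python
def stitch_all(edge_images, pick_best, stitch_pair):
    """Repeatedly select the unstitched edge-area image with the highest
    coincidence degree and merge it into the running result, until every
    fingerprint image of the user has been stitched (the list is consumed)."""
    result = edge_images.pop(0)       # initial target image
    while edge_images:
        i = pick_best(result, edge_images)
        result = stitch_pair(result, edge_images.pop(i))
    return result

# toy "images" as sets of ridge points: coincidence = overlap size, stitching = union
images = [{1, 2}, {3, 4}, {2, 3}]
best = lambda cur, cands: max(range(len(cands)), key=lambda i: len(cur & cands[i]))
print(stitch_all(images, best, lambda a, b: a | b))  # {1, 2, 3, 4}
```

Choosing the most-overlapping image at each step mirrors the method's selection of the edge area image with the largest coincidence degree.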
According to the method provided by the embodiment of the application, the fingerprint images are cropped, which reduces the number of pixels to be analyzed during stitching and improves the efficiency of fingerprint stitching. Meanwhile, by analyzing the image regions remaining after cropping, the mutually overlapping regions of the images are obtained, and a coordinate system is established from the pixel points in the overlapping region to stitch the images; it is therefore unnecessary to analyze every pixel of the images, and the images are stitched directly through the mapping relation between the coordinate systems, further improving image stitching efficiency and solving the problem of low efficiency in fingerprint stitching.
As shown in fig. 4, an embodiment of the present application provides a schematic block diagram of a fingerprint splicing device 10, where the fingerprint splicing device 10 includes: the system comprises an image clipping module 11, a first feature extraction module 12, a superposition area screening module 13, a coordinate system construction module 14 and an image stitching module 15.
The image clipping module 11 is configured to obtain a plurality of fingerprint images of a user, and clip an edge area image of each fingerprint image;
the first feature extraction module 12 is configured to extract global features of each of the edge area images, select one of the edge area images as a target image, and respectively calculate a degree of coincidence between the target image and the other unselected edge area images according to the global features;
the overlapping region screening module 13 is configured to select, from the other unselected edge region images, an edge region image with the largest overlapping ratio as an image to be stitched, and identify an overlapping region of the image to be stitched and the target image;
the coordinate system construction module 14 is configured to arbitrarily select one pixel point from the overlapping area as a target pixel point, respectively construct a first coordinate system in the target image by using the target pixel point as the same reference point, and construct a second coordinate system in the image to be spliced;
the image stitching module 15 is configured to traverse the position coordinates of each pixel point in the first coordinate system and the second coordinate system, and stitch the target image and the image to be stitched according to the position coordinates.
As shown in fig. 5, an embodiment of the present application provides an electronic device, which includes a processor 111, a communication interface 112, a memory 113 and a communication bus 114, where the processor 111, the communication interface 112 and the memory 113 communicate with one another through the communication bus 114,
a memory 113 for storing a computer program;
in one embodiment of the present application, the processor 111 is configured to implement the fingerprint stitching method provided in any one of the foregoing method embodiments when executing the program stored in the memory 113, where the method includes:
acquiring a plurality of fingerprint images of a user, and cutting out an edge area image of each fingerprint image;
extracting global features of each edge region image, selecting one of the edge region images as a target image, and respectively calculating the coincidence degree between the target image and the other unselected edge region images according to the global features;
selecting the edge area image with the largest coincidence degree from the other unselected edge area images as the image to be stitched, and identifying the overlapping region of the image to be stitched and the target image;
selecting one pixel point from the overlapping area as a target pixel point at will, respectively constructing a first coordinate system in the target image by taking the target pixel point as the same reference point, and constructing a second coordinate system in the image to be spliced;
Traversing the position coordinates of each pixel point in the first coordinate system and the second coordinate system, and splicing the target image and the image to be spliced according to the position coordinates.
The present application further provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the fingerprint stitching method provided in any one of the method embodiments described above.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the invention, provided to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A method of fingerprint stitching, the method comprising:
acquiring a plurality of fingerprint images of a user, and cutting out an edge area image of each fingerprint image;
extracting global features of each edge region image, selecting one edge region image as a target image one by one, and respectively calculating the coincidence degree between the target image and the other unselected edge region images according to the global features;
selecting the edge area image with the largest coincidence degree from the other unselected edge area images as the image to be stitched, and identifying the overlapping region of the image to be stitched and the target image;
Selecting one pixel point from the overlapping area as a target pixel point at will, respectively constructing a first coordinate system in the target image by taking the target pixel point as the same reference point, and constructing a second coordinate system in the image to be spliced;
traversing the position coordinates of each pixel point in the first coordinate system and the second coordinate system, and splicing the target image and the image to be spliced according to the position coordinates;
the extracting global features of each edge area image includes:
selecting one of the edge area images one by one from the edge area images, and counting the pixel value of each pixel point in the selected edge area image;
taking the maximum pixel value and the minimum pixel value among the pixel values as parameters of a preset mapping function, and mapping the pixel value of each pixel point in the selected edge area image into a preset range by utilizing the preset mapping function;
calculating pixel gradients of each row of pixels in the mapped edge region image, converting the pixel gradients of each row of pixels into row vectors, and splicing the row vectors into global features of the edge region image;
The calculating, according to the global features, the coincidence degree between the target image and the other unselected edge area images includes:
respectively calculating, by using the following coincidence degree algorithm, the coincidence degree between the target image and each unselected edge area image:
Figure FDA0004093030710000011
wherein Cov is the coincidence degree, a is the global feature of the target image, and b_n is the global feature of the n-th edge area image.
2. The fingerprint stitching method of claim 1, wherein cropping out an edge area image of each fingerprint image comprises:
selecting one fingerprint image from the fingerprint images;
measuring the size of a fingerprint in the fingerprint image and selecting a center pixel of the fingerprint;
and calculating a clipping range according to the size, clipping the selected fingerprint image according to the central pixel and the clipping range, and obtaining an edge area image of the selected fingerprint image.
3. The fingerprint stitching method according to claim 1, wherein the identifying the overlapping region of the image to be stitched and the target image comprises:
utilizing a pre-constructed sliding window to frame and select the areas in the images to be spliced one by one to obtain a pixel window;
Selecting one pixel point from the pixel window one by one as a target pixel point;
judging whether the pixel value of the target pixel point is an extremum in the pixel window;
when the pixel value of the target pixel point is not an extremum in the pixel window, returning to the step of selecting one pixel point from the pixel window one by one as the target pixel point;
when the pixel value of the target pixel point is an extremum in the pixel window, determining the target pixel point as a key point;
vectorizing pixel values of all key points in all pixel windows, and collecting the obtained vectors into local features of the pixel windows;
and determining the superposition area of the image to be spliced and the target image according to the local characteristics of the image to be spliced and the local characteristics of the target image.
4. A fingerprint stitching method according to claim 3, wherein said determining the region of overlap of the image to be stitched and the target image based on the local features of the image to be stitched and the local features of the target image comprises:
selecting one pixel window from the pixel windows of the images to be spliced one by one as a target window, and calculating distance values between local features of the target window and local features of other pixel windows of the target image;
when no pixel window whose distance value from the local feature of the target window is smaller than the preset threshold exists, returning to the step of selecting, one by one, a pixel window of the image to be stitched as the target window;
and when a pixel window whose distance value from the local feature of the target window is smaller than the preset threshold exists, determining the region framed by the target window in the image to be stitched and the region framed by that pixel window in the target image as the overlapping region.
5. The fingerprint stitching method according to claim 1, wherein the constructing a first coordinate system in the target image and a second coordinate system in the image to be stitched with the target pixel point as the same reference point respectively includes:
constructing an abscissa from an origin in a horizontal direction and an ordinate from the origin in a vertical direction by taking any point in the target image as the origin;
measuring the vertical distance between the reference point and the abscissa or the ordinate, multiplying the vertical distance by a preset scaling factor, and taking the multiplied result as a unit scale;
Performing scale marking on the horizontal coordinate and the vertical coordinate by utilizing the unit scale to obtain a first coordinate system;
and determining the coordinate value of the reference point in the first coordinate system, determining the origin of the second coordinate system according to the coordinate value of the reference point in the image to be spliced, and marking the second coordinate system according to the unit scale to obtain the second coordinate system.
6. The fingerprint stitching method according to any one of claims 1-5, wherein the stitching the target image and the image to be stitched according to the position coordinates comprises:
randomly selecting an image from the target image and the image to be spliced, and traversing the pixel value and the position coordinate of each pixel point in the selected image;
and filling the pixel value of each pixel point in the selected image into the unselected image according to the position coordinates, and completing the splicing of the target image and the image to be spliced.
7. A fingerprint splice device, the device comprising:
the image clipping module is used for acquiring a plurality of fingerprint images of a user and clipping out an edge area image of each fingerprint image;
The first feature extraction module is used for extracting global features of each edge region image, selecting one of the edge region images as a target image, and respectively calculating the coincidence ratio between the target image and the other unselected edge region images according to the global features;
the overlapping region screening module is used for selecting the edge region image with the largest overlapping ratio from other unselected edge region images as an image to be spliced, and identifying the overlapping region of the image to be spliced and the target image;
the coordinate system construction module is used for arbitrarily selecting one pixel point from the overlapping area as a target pixel point, respectively constructing a first coordinate system in the target image by taking the target pixel point as the same reference point, and constructing a second coordinate system in the image to be spliced;
the image stitching module is used for traversing the position coordinates of each pixel point in the first coordinate system and the second coordinate system and stitching the target image and the image to be stitched according to the position coordinates;
the extracting global features of each edge area image includes:
Selecting one of the edge area images one by one from the edge area images, and counting the pixel value of each pixel point in the selected edge area image;
taking the maximum pixel value and the minimum pixel value among the pixel values as parameters of a preset mapping function, and mapping the pixel value of each pixel point in the selected edge area image into a preset range by utilizing the preset mapping function;
calculating pixel gradients of each row of pixels in the mapped edge region image, converting the pixel gradients of each row of pixels into row vectors, and splicing the row vectors into global features of the edge region image;
the calculating, according to the global features, the coincidence degree between the target image and the other unselected edge area images includes:
respectively calculating, by using the following coincidence degree algorithm, the coincidence degree between the target image and each unselected edge area image:
Figure FDA0004093030710000041
wherein Cov is the coincidence degree, a is the global feature of the target image, and b_n is the global feature of the n-th edge area image.
8. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
A memory for storing a computer program;
a processor for implementing the steps of the fingerprint stitching method of any one of claims 1-6 when executing a program stored on a memory.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the fingerprint stitching method according to any one of claims 1-6.
CN202111204112.5A 2021-10-15 2021-10-15 Fingerprint splicing method and device, electronic equipment and medium Active CN114004744B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111204112.5A CN114004744B (en) 2021-10-15 2021-10-15 Fingerprint splicing method and device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111204112.5A CN114004744B (en) 2021-10-15 2021-10-15 Fingerprint splicing method and device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN114004744A CN114004744A (en) 2022-02-01
CN114004744B true CN114004744B (en) 2023-04-28

Family

ID=79923062

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111204112.5A Active CN114004744B (en) 2021-10-15 2021-10-15 Fingerprint splicing method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN114004744B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110020591A (en) * 2019-02-01 2019-07-16 敦泰电子有限公司 Fingerprint template register method and fingerprint identification device based on slidingtype sampling
CN112329528A (en) * 2020-09-29 2021-02-05 北京迈格威科技有限公司 Fingerprint input method and device, storage medium and electronic equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2004051575A1 (en) * 2002-12-05 2006-04-06 セイコーエプソン株式会社 Feature region extraction apparatus, feature region extraction method, and feature region extraction program
CN104463129B (en) * 2014-12-17 2018-03-02 浙江维尔科技股份有限公司 A kind of fingerprint register method and device
CN105354544A (en) * 2015-10-29 2016-02-24 小米科技有限责任公司 Fingerprint identification method and apparatus
WO2020223881A1 (en) * 2019-05-06 2020-11-12 深圳市汇顶科技股份有限公司 Fingerprint detection method and apparatus, and electronic device
CN111415298B (en) * 2020-03-20 2023-06-02 北京百度网讯科技有限公司 Image stitching method and device, electronic equipment and computer readable storage medium
CN112511767B (en) * 2020-10-30 2022-08-02 山东浪潮科学研究院有限公司 Video splicing method and device, and storage medium
CN112686806B (en) * 2021-01-08 2023-03-24 腾讯科技(深圳)有限公司 Image splicing method and device, electronic equipment and storage medium
CN112837222A (en) * 2021-01-25 2021-05-25 深圳市奔凯安全技术股份有限公司 Fingerprint image splicing method and device, storage medium and electronic equipment
CN113436068B (en) * 2021-06-10 2022-12-02 浙江大华技术股份有限公司 Image splicing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114004744A (en) 2022-02-01

Similar Documents

Publication Publication Date Title
JP6091560B2 (en) Image analysis method
Kumar et al. Sharpness estimation for document and scene images
US8787695B2 (en) Image rectification using text line tracks
JP4772839B2 (en) Image identification method and imaging apparatus
Dickscheid et al. Coding images with local features
US8396285B2 (en) Estimating vanishing points in images
JP2007524950A (en) Object detection method, object detection apparatus, and object detection program
US9058537B2 (en) Method for estimating attribute of object, apparatus thereof, and storage medium
US20140003723A1 (en) Text Detection Devices and Text Detection Methods
US11475707B2 (en) Method for extracting image of face detection and device thereof
CN111160169A (en) Face detection method, device, equipment and computer readable storage medium
CN116071790A (en) Palm vein image quality evaluation method, device, equipment and storage medium
CN114004744B (en) Fingerprint splicing method and device, electronic equipment and medium
JP6754717B2 (en) Object candidate area estimation device, object candidate area estimation method, and object candidate area estimation program
JP5335554B2 (en) Image processing apparatus and image processing method
CN109325489B (en) Image recognition method and device, storage medium and electronic device
US9569681B2 (en) Methods and systems for efficient image cropping and analysis
JP5755516B2 (en) Object shape estimation device
US10140509B2 (en) Information processing for detection and distance calculation of a specific object in captured images
CN108734175A (en) A kind of extracting method of characteristics of image, device and electronic equipment
CN113420859A (en) Two-dimensional code, and method, device and equipment for generating and identifying two-dimensional code
CN107220650B (en) Food image detection method and device
CN115731256A (en) Vertex coordinate detection method, device, equipment and storage medium
JP4812743B2 (en) Face recognition device, face recognition method, face recognition program, and recording medium recording the program
CN116051390B (en) Motion blur degree detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518063 t2-a2-a, hi tech Industrial Village, No. 022, Gaoxin Nanqi Road, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen yaliote Technology Co.,Ltd.

Address before: 518063 t2-a2-a, hi tech Industrial Village, No. 022, Gaoxin Nanqi Road, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN ARATEK BIOMETRICS TECHNOLOGY Co.,Ltd.

GR01 Patent grant