CN111797828A - Method and device for acquiring ROI (region of interest) of finger vein image and related equipment

Info

Publication number: CN111797828A
Authority: CN (China)
Prior art keywords: boundary, region, vector, gradient, finger
Legal status: Granted
Application number: CN202010567319.8A
Other languages: Chinese (zh)
Other versions: CN111797828B (en)
Inventors: 王栋, 刘永松
Current Assignee: Athena Eyes Co Ltd
Original Assignee: Athena Eyes Co Ltd
Application filed by Athena Eyes Co Ltd
Priority to CN202010567319.8A
Publication of CN111797828A
Application granted; publication of CN111797828B
Current legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/56 - Extraction of image or video features relating to colour
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/14 - Vascular patterns
    • G06T 5/90

Abstract

The invention relates to the technical field of image processing, and provides a method, an apparatus and related equipment for acquiring the ROI (region of interest) of a finger vein image. The acquisition method comprises: acquiring a finger vein image; preprocessing the finger vein image to obtain a gradient image; extracting a plurality of first sections from the gradient image and acquiring the rough boundary positions of the finger in each first section; obtaining reference boundary coordinates of the finger in the gradient image from the rough boundary positions; extracting a plurality of second sections from the gradient image and acquiring reference average boundary coordinates of the finger in each second section from the reference boundary coordinates; and fitting the reference average boundary coordinates of the second sections to obtain the actual boundary of the finger, the region formed by the actual boundary being taken as the ROI region. Implementing the method and apparatus solves the problem of the low accuracy of ROI positioning in prior-art finger vein recognition.

Description

Method and device for acquiring ROI (region of interest) of finger vein image and related equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for acquiring an ROI (region of interest) of a finger vein image and related equipment.
Background
In recent years, with the deepening research and progress of deep learning methods, biometric technology has been widely applied in many fields. More and more products now use biometric technologies in place of traditional passwords, password cards and similar means for identity authentication, and accept information directly through voice instead of the traditional typed input. Finger vein recognition is one such biometric technology: it automatically identifies a person through unique physiological or behavioral characteristics of the human body, requires no memorized password, offers high uniqueness, is difficult to steal, and is simple, convenient and fast to operate, so it has been widely researched and has developed rapidly in recent years.
At present, existing finger vein recognition technology generally needs to obtain a region of interest (ROI) in the finger vein image before the finger vein can be recognized and matched. In some finger vein images, however, the boundary between the foreground and the background is very blurred, and may even be affected by the near-infrared illumination: in some images the boundary regions of the foreground and the background present a consistent gray-scale distribution (for example, they appear uniformly bright under weak exposure), while in others the gray scale changes only gradually between the foreground region and the background region, so that the real shape of the foreground-background boundary is completely obscured. As a result, a precise ROI region cannot be obtained, the extracted features are not an accurate description of the real vein distribution, and the accuracy of the final recognition and matching is affected.
In summary, the accuracy of ROI positioning in prior-art finger vein recognition technology is therefore low.
Disclosure of Invention
The invention provides a method and a device for acquiring an ROI (region of interest) of a finger vein image and related equipment, which aim to solve the problem of low detection efficiency of the method for acquiring the ROI of the finger vein image in the prior art.
The present invention is achieved as follows. A first embodiment of the present invention provides a method for acquiring the ROI region of a finger vein image, including:
acquiring a finger vein image;
preprocessing a finger vein image to obtain a gradient image;
extracting a plurality of first sections from the gradient image, and respectively acquiring rough boundary positions of fingers in the plurality of first sections;
obtaining a reference boundary coordinate of the finger in the gradient image according to the rough boundary position;
extracting a plurality of second sections from the gradient image, and respectively acquiring reference average boundary coordinates of fingers in the plurality of second sections according to the reference boundary coordinates;
and fitting the reference average boundary coordinates of the plurality of second sections to obtain the actual boundary of the finger, and taking the region formed by the actual boundary as the ROI region.
A second embodiment of the present invention provides an apparatus for acquiring an ROI region of a finger vein image, including:
the finger vein image acquisition module is used for acquiring a finger vein image;
the gradient image acquisition module is used for preprocessing the finger vein image to obtain a gradient image;
the rough boundary position acquisition module is used for extracting a plurality of first sections from the gradient image and respectively acquiring rough boundary positions of fingers in the plurality of first sections;
the reference boundary coordinate acquisition module is used for acquiring reference boundary coordinates of fingers in the gradient image according to the rough boundary position;
the reference average boundary coordinate acquisition module is used for extracting a plurality of second sections from the gradient image and respectively acquiring reference average boundary coordinates of fingers in the plurality of second sections according to the reference boundary coordinates;
and the ROI area acquisition module is used for fitting the reference average boundary coordinates of the plurality of second sections to obtain the actual boundary of the finger, and the area formed by the actual boundary is used as the ROI area.
A third embodiment of the present invention provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the steps of the method for acquiring the ROI region of the finger vein image according to the first embodiment of the present invention.
A fourth embodiment of the present invention provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the method for acquiring the ROI region of the finger vein image provided in the first embodiment of the present invention.
The invention provides a method, an apparatus and related equipment for acquiring the ROI region of a finger vein image. The method first obtains the rough boundary position of the finger, then obtains reference boundary coordinates of the finger from the rough boundary position, obtains reference average boundary coordinates from the reference boundary coordinates, and finally fits the reference average boundary coordinates to obtain the actual boundary of the finger and thus the ROI region of the finger vein image. This effectively reduces the interference caused by electronic clutter and uneven illumination, enhances the finger-contour features in the gradient image, greatly improves the accuracy with which the ROI region of the finger vein image is identified, and solves the problem of the low detection efficiency of prior-art methods for acquiring the ROI region of a finger vein image.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
Fig. 1 is a schematic application environment diagram of a method for acquiring an ROI region of a finger vein image according to a first embodiment of the present invention;
fig. 2 is a schematic flowchart of a method for acquiring an ROI region of a finger vein image according to a first embodiment of the present invention;
fig. 3 is a schematic flowchart of step 12 in the method for acquiring the ROI region of the finger vein image according to the first embodiment of the present invention;
fig. 4 is a schematic flowchart of step 13 in the method for acquiring the ROI region of the finger vein image according to the first embodiment of the present invention;
fig. 5 is a schematic flowchart of step 134 in the method for acquiring the ROI region of the finger vein image according to the first embodiment of the present invention;
fig. 6 is a schematic flowchart of step 14 in the method for acquiring the ROI region of the finger vein image according to the first embodiment of the present invention;
fig. 7 is a flowchart illustrating step 15 in the method for acquiring the ROI region of the finger vein image according to the first embodiment of the present invention;
fig. 8 is a block diagram of an apparatus for acquiring an ROI region of a finger vein image according to a second embodiment of the present invention;
fig. 9 is a block diagram of a computer device according to a third embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The method for acquiring the ROI region of a finger vein image according to the first embodiment of the present invention can be applied in the application environment shown in fig. 1, in which an acquisition device communicates with a server. The acquisition device captures a finger vein image and sends it to the server. The server receives the finger vein image and preprocesses it to obtain a gradient image; it then extracts a plurality of first sections from the gradient image and acquires the rough boundary position of the finger in each first section; obtains reference boundary coordinates of the finger in the gradient image from the rough boundary positions; extracts a plurality of second sections from the gradient image and acquires reference average boundary coordinates of the finger in each second section from the reference boundary coordinates; and finally fits the reference average boundary coordinates of the second sections to obtain the actual boundary of the finger, taking the region formed by the actual boundary as the ROI region. The acquisition device may be a capture device equipped with a camera. The server may be any device with image data processing capability, and may be implemented as an independent server or as a server cluster composed of several servers.
It should be noted that fig. 1 only shows an application scenario of the present embodiment, and the acquisition device may also be an intelligent device that acquires an image containing finger vein pattern distribution information from the shooting device, which is not limited herein.
In the embodiment of the present invention, as shown in fig. 2, a method for acquiring an ROI of a finger vein image is provided, which is described by taking the method applied to the server side in fig. 1 as an example, and includes the following steps 11 to 16.
Step 11: finger vein images are acquired.
The finger vein image may be an image containing finger vein pattern distribution information. In this embodiment, the finger vein image may be captured using light that can penetrate the skin layer, for example near-infrared light.
In addition, in the present embodiment, the pixel size of the finger vein image is not particularly limited, and for example, the finger vein image may be 320 × 240 pixel size.
Step 12: and preprocessing the finger vein image to obtain a gradient image.
Specifically, a gradient image is obtained according to the pixel values of the pixels in the finger vein image.
Further, as an implementation manner of this embodiment, as shown in fig. 3, the step 12 includes the following steps 121 to 123.
Step 121: and performing adjacent pixel smoothing processing on each pixel in the finger vein image to obtain a smooth value of each pixel in the horizontal direction of the finger vein image.
Specifically, each pixel is taken in turn as a first central pixel; the pixel before it in the horizontal direction of the finger vein image, the first central pixel itself, and the pixel after it form a three-dimensional pixel vector, and the dot product of this pixel vector with the smooth vector gives the smooth value of the first central pixel in the horizontal direction of the finger vein image. It should be noted that every pixel must serve as the first central pixel; that is, the dot product is performed once per pixel to obtain the smooth value of each pixel in the horizontal direction of the finger vein image. The smooth vector in this embodiment may be a preset vector and may be set according to empirical values.
To better explain the smooth value of a pixel in the horizontal direction of the finger vein image, take one pixel as an example: pixel A in the finger vein image has pixel value a; the pixel immediately before A in the horizontal direction is pixel B, with pixel value b; the pixel immediately after A is pixel C, with pixel value c. Pixels B, A and C form the three-dimensional pixel vector [b, a, c]; with the smooth vector [1, 2, 1], the dot product of [b, a, c] and [1, 2, 1], i.e. b + 2a + c, is the smooth value of pixel A in the horizontal direction of the finger vein image. Note that this example uses the pixel values of the pixels to form the three-dimensional pixel vector; the embodiment does not particularly limit which attribute is used to form the vector.
Step 122: and calculating gradient values of the pixels in the vertical direction of the finger vein image according to the smooth values of the pixels.
Specifically, each pixel is taken as a second center pixel, a difference absolute value between smooth values of two pixels before and after the second center pixel in the vertical direction of the finger vein image is calculated, and the difference absolute value is taken as a gradient value of the second center pixel in the vertical direction of the finger vein image. It should be noted that each pixel is required to be the second central pixel, that is, the above calculation of absolute difference value should be performed a plurality of times to obtain the gradient value of each pixel in the vertical direction of the finger vein image.
Step 123: and obtaining a gradient image according to the gradient value of each pixel.
Specifically, the vertical-direction gradient values of the pixels obtained in step 122 are arranged to form the gradient image.
Through the implementation of the above steps 121 to 123, the smooth value of each pixel in the horizontal direction can be calculated first, and then the gradient value in the vertical direction is calculated according to the smooth value in the horizontal direction, so as to obtain a corresponding gradient image.
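As an illustration only, the following is a minimal NumPy sketch of one way steps 121 to 123 could be realised; the function name, the edge handling at the image border, and the use of unnormalised [1, 2, 1] weights are assumptions, not taken from the patent.

```python
import numpy as np

def gradient_image(img: np.ndarray) -> np.ndarray:
    """Sketch of steps 121-123: horizontal [1, 2, 1] smoothing followed by a
    vertical absolute-difference gradient. `img` is a 2-D gray-scale array."""
    img = img.astype(np.float64)

    # Step 121: smooth each pixel with its horizontal neighbours (weights 1, 2, 1).
    # Border columns are handled by edge replication, which is an assumption.
    padded = np.pad(img, ((0, 0), (1, 1)), mode="edge")
    smooth = padded[:, :-2] + 2.0 * padded[:, 1:-1] + padded[:, 2:]

    # Step 122: gradient = |smooth value of the pixel below - smooth value of the pixel above|.
    grad = np.zeros_like(smooth)
    grad[1:-1, :] = np.abs(smooth[2:, :] - smooth[:-2, :])

    # Step 123: the per-pixel gradient values form the gradient image.
    return grad
```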
Further, as an implementation manner of this embodiment, the following step 124 is included before the step 121.
Step 124: and carrying out logarithmic transformation on the finger vein image to obtain the finger vein image subjected to logarithmic transformation.
Specifically, the logarithmic transformation is performed on the pixel gray value of the pixel in the finger vein image through the following formula (1):
y=log(x) (1)
where x represents the pixel gray value and y represents the transformed pixel value. In formula (1), the base of the logarithm is e (approximately 2.71828).
It should be noted that in this embodiment the gray value of every pixel needs to be logarithmically transformed; that is, formula (1) must be applied once per pixel to obtain the transformed value of each pixel.
In this embodiment, applying the logarithmic transformation to each pixel stretches the narrow range of low gray values while compressing the wide range of high gray values. This enhances the low-gray-level details of the image: dark pixel values in the finger vein image are expanded and bright pixel values are compressed, which removes the influence of uneven near-infrared illumination caused by the physical arrangement of the device.
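For illustration, a minimal sketch of the logarithmic transformation of step 124; the +1 offset used to avoid log(0) is an assumption, since the embodiment does not state how zero gray values are handled.

```python
import numpy as np

def log_transform(img: np.ndarray) -> np.ndarray:
    """Sketch of step 124 / formula (1): y = log(x) with natural base."""
    # +1 avoids log(0); this offset is an assumption, not from the patent.
    return np.log(img.astype(np.float64) + 1.0)
```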
Step 13: a plurality of first sections are extracted from the gradient image, and rough boundary positions of the fingers in the plurality of first sections are respectively obtained.
Specifically, a plurality of first sections are extracted along the horizontal direction of the gradient image, and the first sections do not overlap one another.
Further, as an implementation manner of this embodiment, as shown in fig. 4, the step 13 includes the following steps 131 to 135.
Step 131: a plurality of first sections are extracted from the gradient image in a horizontal direction of the gradient image.
The wider the coverage of the plurality of first sections in the gradient image, the larger the amount of calculation in step 13.
For example, the range of the gradient image in the horizontal direction is [0, 320], and three first sections [0, 80], [120, 200], [240, 320] can be extracted from the horizontal direction of the gradient image.
Step 132: each first section is divided into a plurality of first subsections from the horizontal direction.
Specifically, each first section is equally divided into a plurality of first subsections in the horizontal direction of the gradient image.
For example, if the interval of the first section in the horizontal direction is [0, 80], the first section may be divided into four first sub-sections in the horizontal direction of the gradient image, where the four first sub-sections are [0, 20], [20, 40], [40, 60], [60, 80], respectively.
Step 133: the gradient sum of each first subsection is calculated to obtain the first region vector corresponding to that first subsection.
Here the gradient sum of a first subsection is the sum of its gradient values along the horizontal direction. In addition, when the interval range of the finger vein image in the vertical direction is [0, 240], the first region vector obtained for each first subsection in this step 133 has dimension 240 in the vertical direction and dimension 1 in the horizontal direction.
In the present embodiment, the gradient image has 240 dimensions in the vertical direction, which can be understood as 240 pixels in the vertical direction; the gradient image has 320 dimensions in the horizontal direction, which can be understood as 320 pixels in the horizontal direction.
It should be noted that the first sub-sections and the first region vectors in this embodiment should have a one-to-one correspondence.
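The following sketch illustrates, under the 320 × 240 example of this embodiment, one way steps 131 to 133 could be realised; the section layout follows the example above, and all names are illustrative.

```python
import numpy as np

def first_region_vectors(grad: np.ndarray,
                         sections=((0, 80), (120, 200), (240, 320)),
                         sub_width: int = 20):
    """Sketch of steps 131-133: split each first section into equal first
    subsections and sum the gradients of each subsection along the horizontal
    direction, giving one column vector (height 240, width 1) per subsection."""
    vectors = []  # vectors[i] holds the first region vectors of section i
    for left, right in sections:
        sub_vectors = []
        for x in range(left, right, sub_width):
            # Gradient sum over the horizontal extent of the subsection,
            # one value per image row.
            sub_vectors.append(grad[:, x:x + sub_width].sum(axis=1))
        vectors.append(sub_vectors)
    return vectors
```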
Step 134: and smoothing each first region vector according to a preset smoothing mode to obtain a smooth gradient vector corresponding to the first section.
Each smooth gradient vector corresponds one-to-one with a first section; each smooth gradient vector represents one first section.
Further, as an implementation manner of this embodiment, as shown in fig. 5, the step 134 includes the following steps 1341 to 1343.
Step 1341: a dot product is performed between each first region vector corresponding to a first subsection in the first part of the first section and a preset smooth vector, to obtain the first smooth region vector corresponding to that first subsection.
The first part of the first section comprises the first subsections of that section other than the first subsections located at its two sides. The preset smooth vector may be determined from empirical values. It should be noted that each first smooth region vector corresponds to one first subsection.
In this embodiment, the dot-product processing between each first region vector corresponding to a first subsection in the first part of the first section and the preset smooth vector (for example [1, 2, 1]) may specifically be a shifting convolution of that first region vector with the preset smooth vector, sliding from one end of the vector to the other.
Step 1342: near-dimension averaging is performed on each first region vector corresponding to a first subsection in the second part of the first section, to obtain the second smooth region vector corresponding to that first subsection.
The second part of the first section comprises the first subsections located at the two sides of that section. Each such first subsection corresponds one-to-one with a second smooth region vector. The near-dimension averaging may be a nearest-neighbour 9-dimensional data smoothing.
Performing near-dimension averaging on a first region vector corresponding to a first subsection in the second part of the first section specifically means: for each dimension of that first region vector, compute the mean of the gradient sums over the 9 nearest dimensions.
For example, if one of the first region vectors is [d1, d2, d3, … d240], then smoothing the fifth dimension value (d5) over its 9 nearest dimensions gives (d1 + d2 + d3 + d4 + d5 + d6 + d7 + d8 + d9) / 9.
That is, near-dimension averaging must be applied to every dimension value of the first subsection in order to obtain its second smooth region vector.
Step 1343: self-multiplication and evolution (root extraction) processing is performed on the first smooth region vectors and second smooth region vectors of each first section, to obtain the smooth gradient vector of that first section.
Performing self-multiplication on the first smooth region vectors and second smooth region vectors of one first section specifically means: for each dimension, multiply together the values at that dimension of all the first smooth region vectors and second smooth region vectors of the section (the vectors being arranged along the horizontal direction) to obtain the self-multiplication result for that dimension; then apply successive square-root operations to the self-multiplication result to obtain the smooth gradient value of the section at that dimension. Repeating this for every dimension yields the smooth gradient vector of the first section. When the gradient image is 320 × 240 pixels, the smooth gradient vector of a first section has dimension 1 in the horizontal direction and dimension 240 in the vertical direction.
Through steps 1341 to 1343, the first subsections at different positions within each first section are smoothed in different ways, and the first section as a whole is thereby smoothed; this reduces clutter in the gradient image and effectively lessens the influence of electrical-signal interference and uneven illumination.
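A sketch of one possible reading of steps 1341 to 1343 for a single first section is given below. The interpretation of "self-multiplication and evolution" as a per-dimension product followed by successive square roots (a geometric mean when the number of subsections is a power of two), and the treatment of the [1, 2, 1] smoothing and the 9-point mean as "same"-size convolutions, are assumptions.

```python
import numpy as np

def smooth_gradient_vector(sub_vectors):
    """Sketch of steps 1341-1343 for one first section.
    `sub_vectors` are its first region vectors, ordered left to right.
    The two outermost vectors (the "second part") get a 9-point running mean;
    the inner vectors (the "first part") get a [1, 2, 1] shifting convolution.
    The smoothed vectors are then combined per dimension by multiplying them
    and taking successive square roots."""
    smooth_kernel = np.array([1.0, 2.0, 1.0])
    smoothed = []
    for idx, v in enumerate(sub_vectors):
        if idx in (0, len(sub_vectors) - 1):          # side subsections
            s = np.convolve(v, np.ones(9) / 9.0, mode="same")
        else:                                         # middle subsections
            s = np.convolve(v, smooth_kernel, mode="same")
        smoothed.append(s)

    combined = np.prod(np.stack(smoothed), axis=0)    # per-dimension product
    for _ in range(int(np.ceil(np.log2(len(smoothed))))):
        combined = np.sqrt(combined)                  # successive square roots
    return combined
```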
Step 135: and respectively searching the upper rough boundary position and the lower rough boundary position of the finger from each smooth gradient vector according to the vertical direction of the gradient image to obtain the rough boundary position of the finger in the first section.
Specifically, each smooth gradient vector is divided equally into two intervals along the vertical direction of the gradient image; the maximum of the smooth gradient vector is searched for in one interval to obtain the upper rough boundary position of the finger, and in the other interval to obtain the lower rough boundary position. The upper rough boundary position corresponds to the upper boundary of the finger and the lower rough boundary position corresponds to the lower boundary of the finger.
Since each smooth gradient vector corresponds to one first section, searching the two intervals of each smooth gradient vector along the vertical direction yields the rough boundary positions in the gradient image; that is, each first section has its own upper rough boundary position and lower rough boundary position.
To illustrate step 135 more clearly, consider the following example: a smooth gradient vector [c(1), c(2), c(3), …, c(240)] is divided into two intervals, [c(1), c(2), …, c(120)] and [c(121), c(122), …, c(240)]. The maximum value within [c(1), c(2), …, c(120)] is found and the pixel position of the dimension at which it occurs is taken as the upper rough boundary position of the finger; the maximum value within [c(121), c(122), …, c(240)] is found and the pixel position of the dimension at which it occurs is taken as the lower rough boundary position of the finger. This yields the rough boundary position of the finger for that smooth gradient vector.
It should be noted that the search for the upper and lower rough boundary positions of the finger is performed in an interleaved manner; that is, the searches in the two intervals alternate. In addition, the difference in dimension between the upper rough boundary position and the lower rough boundary position within one smooth gradient vector should not exceed a preset value; that is, the number of pixels between the upper and lower rough boundary positions in one smooth gradient vector should not exceed the preset value. When the gradient image is 320 × 240 pixels, the upper and lower rough boundary positions in one smooth gradient vector should not differ by more than 90 pixels.
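A minimal sketch of the rough-boundary search of step 135 for one smooth gradient vector follows; how a violation of the 90-pixel constraint should be handled is not specified in the text, so the sketch does not attempt it.

```python
import numpy as np

def rough_boundary(sgv: np.ndarray):
    """Sketch of step 135: split one smooth gradient vector into an upper and a
    lower interval and take the position of the maximum in each interval as the
    rough upper/lower boundary position.  The patent also requires the two
    positions not to differ by more than a preset value (90 pixels for a
    320 x 240 image); enforcing that is left open here."""
    half = len(sgv) // 2
    upper = int(np.argmax(sgv[:half]))           # row of the upper rough boundary
    lower = half + int(np.argmax(sgv[half:]))    # row of the lower rough boundary
    return upper, lower
```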
Through steps 131 to 135, a plurality of first subsections are obtained from the gradient image, the first region vector of each first subsection is computed as a gradient sum, and the first region vectors of each first subsection are smoothed. This amplifies the contour features of the finger in the gradient image and effectively reduces clutter interference. The upper and lower rough boundary positions of the finger are then searched for in each smooth gradient vector; because of how the finger is placed in the finger vein acquisition device, the rough boundary can be sought separately in the two intervals, yielding the rough boundary position of the finger.
Step 14: and obtaining the reference boundary coordinates of the finger in the gradient image according to the rough boundary position.
The reference boundary coordinates describe the boundary position of the finger more precisely than the rough boundary position does.
Further, as an implementation manner of this embodiment, as shown in fig. 6, the step 14 includes the following steps 141 to 144.
Step 141: and obtaining the rough boundary mean coordinates of the gradient image according to the rough boundary positions of the fingers in the plurality of first sections.
Specifically, the average of the upper rough boundary positions of the finger over the plurality of first sections is calculated to obtain the upper rough boundary mean coordinate of the gradient image in the vertical direction, and the average of the lower rough boundary positions over the plurality of first sections is calculated to obtain the lower rough boundary mean coordinate in the vertical direction. The upper rough boundary position corresponds to the upper boundary of the finger and the lower rough boundary position corresponds to the lower boundary of the finger.
Step 142: and respectively expanding the pixels of the rough boundary mean value coordinates according to the vertical direction of the gradient image by taking the pixels of each rough boundary mean value coordinate as a center to obtain a first candidate vector corresponding to the rough boundary mean value coordinates.
Specifically, each pixel lying on the rough boundary mean coordinate is taken as a third central pixel, and the third central pixel together with several pixels before it and several pixels after it in the vertical direction of the gradient image forms a first candidate vector. Preferably, each such pixel is taken as a third central pixel, and the third central pixel, the 9 pixels before it and the 9 pixels after it in the vertical direction of the gradient image constitute the first candidate vector.
In addition, it should be noted that, in step 142, the coarse boundary mean coordinate refers to a reference mean in the vertical direction, a straight line should be formed on the gradient image as an upper boundary reference of the finger according to an upper coarse boundary mean coordinate in the coarse boundary mean coordinate, and a straight line should be formed on the gradient image as a lower boundary reference of the finger according to a lower coarse boundary mean coordinate in the coarse boundary mean coordinate, so that the first candidate vector generated according to the upper coarse boundary mean should be plural, and the first candidate vector generated according to the lower coarse boundary mean should also be plural. Specifically, when the gradient image is 320 × 240 pixels in size, the first candidate vectors generated from the upper coarse boundary mean should be 320, and the first candidate vectors generated from the lower coarse boundary mean should also be 320.
Step 143: and respectively carrying out point multiplication operation on the first candidate vectors and a preset first Gaussian filter vector to obtain first enhancement candidate vectors corresponding to each first candidate vector.
Wherein the predetermined first gaussian filter vector may be determined based on empirical values.
Step 144: and respectively searching a maximum value in each first enhancement candidate vector, and taking the coordinate corresponding to the maximum value as the reference boundary coordinate of the gradient image.
It is noted that the reference boundary coordinates include an upper reference boundary coordinate corresponding to an upper boundary position of the finger and a lower reference boundary coordinate corresponding to a lower boundary position of the finger. According to the method of the above steps 141 to 144, the upper reference boundary coordinates and the lower reference boundary coordinates can be obtained.
Through steps 141 to 144, Gaussian filtering enhancement is applied to the gradient image, the reference boundary coordinates are obtained from the rough boundary position, and a more accurate finger boundary position is determined from the maximum of each first enhancement candidate vector, which effectively improves the accuracy of the obtained ROI region.
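The sketch below illustrates one reading of steps 141 to 144 for a single boundary (upper or lower). Treating the "point multiplication" as element-wise weighting, the particular Gaussian values (sigma = 3), and the per-column treatment of the 320 candidate vectors are assumptions.

```python
import numpy as np

def refine_boundary(grad: np.ndarray, mean_row: int, half_win: int = 9):
    """Sketch of steps 141-144 for one boundary.
    For every column, a candidate vector of 2*half_win+1 pixels is taken around
    `mean_row`, weighted element-wise by a Gaussian vector, and the row of the
    maximum is kept."""
    h, w = grad.shape
    offsets = np.arange(-half_win, half_win + 1)
    gauss = np.exp(-(offsets ** 2) / (2.0 * 3.0 ** 2))   # sigma = 3, illustrative
    refined = np.empty(w, dtype=int)
    for col in range(w):
        rows = np.clip(mean_row + offsets, 0, h - 1)
        enhanced = grad[rows, col] * gauss               # first enhancement candidate vector
        refined[col] = rows[int(np.argmax(enhanced))]    # reference boundary row for this column
    return refined
```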
Step 15: and extracting a plurality of second sections from the gradient image, and respectively acquiring reference average boundary coordinates of the fingers in the plurality of second sections according to the reference boundary coordinates.
The reference average boundary coordinates comprise an upper reference average boundary and a lower reference average boundary, the upper reference average boundary corresponds to the upper boundary position of the finger, and the lower reference average boundary corresponds to the lower boundary position of the finger.
Further, as an implementation manner of this embodiment, as shown in fig. 7, the step 15 includes the following steps 151 to 156.
Step 151: a plurality of second sections are extracted from the gradient image according to the horizontal direction of the gradient image, wherein each adjacent second section is partially overlapped.
For example, when the gradient image has a pixel size of 320 × 240, the gradient image is divided into 32 consecutive second segments from the horizontal direction, each second segment contains 20 pixels in the horizontal direction, and any two adjacent second segments contain 10 overlapping pixels.
Step 152: and respectively calculating the gradient sum of each second section to obtain a second region vector corresponding to each second section.
Specifically, the gradients of each second section are summed along the horizontal dimension to obtain the second region vector corresponding to that second section. When the gradient image is 320 × 240 pixels, each second region vector has dimension 240 in the vertical direction and dimension 1 in the horizontal direction.
Step 153: and smoothing each second region vector to obtain a smoothed third smooth region vector.
Specifically, each second region vector and a preset smooth vector are subjected to point multiplication to obtain a third smooth region vector after the smoothing. For example, the preset smoothing vector may be [1, 2, 1 ].
Step 154: the element of each third smooth region vector located at the reference boundary coordinate is taken as a center and expanded along the vertical direction of the third smooth region vector, to obtain the second candidate vector corresponding to that third smooth region vector.
Each element of a third smooth region vector represents one dimension of that vector in the vertical direction.
Specifically, the element located at the reference boundary coordinate is taken as the central element, and the central element together with several elements before it and several elements after it in the third smooth region vector forms the second candidate vector. Preferably, the central element, the 9 elements before it and the 9 elements after it constitute the second candidate vector.
Step 155: a point multiplication is performed between each second candidate vector and a preset second Gaussian filter vector, to obtain the second enhancement candidate vector corresponding to that second candidate vector.
The preset second Gaussian filter vector may be determined based on empirical values.
Step 156: and respectively searching a maximum value in each second enhancement candidate vector, and taking the coordinate corresponding to the maximum value as the reference average boundary coordinate of the finger in the second section.
It should be noted that by repeatedly executing step 156, the upper reference average boundary coordinate and the lower reference average boundary coordinate of each second section can be obtained. When the gradient image is 320 × 240 pixels and is divided into 32 second sections, step 156 needs to be repeated 64 times, since the reference average boundary coordinates include both an upper and a lower reference average boundary coordinate.
Through steps 151 to 156, the reference average boundary coordinates of the finger in each of the second sections are obtained from the reference boundary coordinates. Representing the boundary position of the finger with these coordinates amplifies the finger-contour features and improves the accuracy of identifying the ROI region.
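For illustration, a sketch of one reading of steps 151 to 156 for a single boundary; the Gaussian weights, the element-wise reading of "point multiplication", and the handling of the last, narrower section at the image edge are assumptions.

```python
import numpy as np

def reference_average_boundaries(grad: np.ndarray, ref_row: int,
                                 width: int = 20, step: int = 10,
                                 half_win: int = 9):
    """Sketch of steps 151-156 for one boundary (upper or lower).
    Overlapping second sections of `width` pixels are taken every `step`
    pixels; each section's gradient sum is smoothed with [1, 2, 1]; a window of
    2*half_win+1 elements around the reference boundary row is weighted by a
    Gaussian vector and the row of its maximum is kept as that section's
    reference average boundary coordinate."""
    h, w = grad.shape
    offsets = np.arange(-half_win, half_win + 1)
    gauss = np.exp(-(offsets ** 2) / (2.0 * 3.0 ** 2))      # sigma = 3, illustrative
    boundaries = []
    for left in range(0, w, step):                           # last section may be narrower
        region = grad[:, left:left + width].sum(axis=1)      # second region vector
        smooth = np.convolve(region, [1.0, 2.0, 1.0], mode="same")  # third smooth region vector
        rows = np.clip(ref_row + offsets, 0, h - 1)
        enhanced = smooth[rows] * gauss                      # second enhancement candidate vector
        boundaries.append(int(rows[np.argmax(enhanced)]))
    return boundaries
```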
Step 16: and fitting the reference average boundary coordinates of the plurality of second sections to obtain the actual boundary of the finger, and taking the region formed by the actual boundary as the ROI region.
Specifically, the actual upper boundary of the finger is obtained by linearly fitting the upper reference average boundary coordinates obtained in step 15, and the actual lower boundary of the finger is obtained by linearly fitting the lower reference average boundary coordinates obtained in step 15; the region enclosed by the actual upper boundary and the actual lower boundary of the finger is taken as the ROI region.
Specifically, the coordinate where the actual boundary of the finger is located may be obtained by fitting the reference average boundary coordinates of the plurality of second segments by the following equation (2):
Y(i, j) = (D(i) * (m - j) + D(i+1) * j + 5) / m    (2)
where D(i) is the reference average boundary coordinate of the i-th second section, i is the index of the reference average boundary over the second sections (i does not exceed the number of second sections), j ranges over [0, m], m is the number of overlapping pixels in the horizontal direction between adjacent second sections, and Y(i, j) is the actual boundary coordinate at the position corresponding to (i, j).
For the 320 × 240 gradient image of this embodiment, the gradient image is divided into 32 consecutive second sections along the horizontal direction, each second section containing 20 pixels in the horizontal direction and any two adjacent second sections overlapping by 10 pixels; in this case m is 10 and i runs up to 32.
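A short sketch of how formula (2) might be applied, assuming the +5 term is intended as rounding for integer division with m = 10; the names are illustrative.

```python
def fitted_boundary(D, m: int = 10):
    """Sketch of formula (2): the actual boundary coordinate between the
    reference average boundaries D[i] and D[i+1] of adjacent second sections is
    a linear interpolation; with m = 10 the +5 term acts as rounding.
    D is the list of reference average boundary coordinates (upper or lower)."""
    Y = []
    for i in range(len(D) - 1):
        for j in range(m):
            Y.append((D[i] * (m - j) + D[i + 1] * j + 5) // m)
    return Y
```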
Through steps 11 to 16, the rough boundary position of the finger is obtained, the reference boundary coordinates of the finger are derived from the rough boundary position, the reference average boundary coordinates are derived from the reference boundary coordinates, and finally the reference average boundary coordinates are fitted to obtain the actual boundary of the finger and thus the ROI region of the finger vein image. This effectively reduces the interference caused by electronic clutter and uneven illumination, enhances the finger-contour features in the gradient image, greatly improves the accuracy with which the ROI region of the finger vein image is identified, and solves the problem of the low detection efficiency of prior-art methods for acquiring the ROI region of a finger vein image.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Further, as shown in fig. 8, the apparatus for acquiring the ROI region of the finger vein image includes: a finger vein image acquisition module 41, a gradient image acquisition module 42, a rough boundary position acquisition module 43, a reference boundary coordinate acquisition module 44, a reference average boundary coordinate acquisition module 45, and an ROI region acquisition module 46. The functional modules are explained in detail as follows:
a finger vein image acquisition module 41 for acquiring a finger vein image;
the gradient image acquisition module 42 is configured to perform preprocessing on the finger vein image to obtain a gradient image;
a rough boundary position obtaining module 43, configured to extract a plurality of first segments from the gradient image, and obtain rough boundary positions of fingers in the plurality of first segments respectively;
a reference boundary coordinate obtaining module 44, configured to obtain a reference boundary coordinate of a finger in the gradient image according to the rough boundary position;
a reference average boundary coordinate obtaining module 45, configured to extract a plurality of second segments from the gradient image, and obtain reference average boundary coordinates of fingers in the plurality of second segments according to the reference boundary coordinates;
and an ROI region obtaining module 46, configured to fit the reference average boundary coordinates of the plurality of second segments to obtain an actual boundary of the finger, and use a region formed by the actual boundary as an ROI region.
Further, as an implementation of the present embodiment, the gradient image acquisition module 42 includes a smooth value acquisition unit, a gradient value acquisition unit, and a gradient image acquisition unit. The functional units are explained in detail as follows:
the smooth value acquisition unit is used for performing adjacent pixel smoothing processing on each pixel in the finger vein image to obtain a smooth value of each pixel in the horizontal direction of the finger vein image;
a gradient value acquisition unit for calculating gradient values of the respective pixels in a vertical direction of the finger vein image based on the smoothed values of the respective pixels;
and the gradient image acquisition unit is used for obtaining a gradient image according to the gradient value of each pixel.
Further, as an implementation manner of the present embodiment, the gradient image obtaining module 42 further includes a logarithmic transformation processing unit. The log transform processing unit is described in detail as follows:
and the logarithmic transformation processing unit is used for carrying out logarithmic transformation on the finger vein image to obtain the finger vein image subjected to the logarithmic transformation.
Further, as an implementation manner of the present embodiment, the rough boundary position acquisition module 43 includes a first section acquisition unit, a first sub-section acquisition unit, a first region vector acquisition unit, a smooth gradient vector acquisition unit, and a rough boundary position acquisition unit. The functional units are explained in detail as follows:
a first section acquisition unit for extracting a plurality of first sections from the gradient image in a horizontal direction of the gradient image;
a first subsection obtaining unit for dividing each first section into a plurality of first subsections from the horizontal direction;
the first region vector acquisition unit is used for respectively calculating the gradient sum of each first subsection to obtain a first region vector corresponding to each first subsection;
the smooth gradient vector acquisition unit is used for carrying out smoothing processing on each first region vector according to a preset smoothing mode to obtain a smooth gradient vector corresponding to the first section;
and the rough boundary position acquisition unit is used for respectively searching the upper rough boundary position and the lower rough boundary position of the finger from each smooth gradient vector according to the vertical direction of the gradient image to obtain the rough boundary position of the finger in the first section.
Further, as an implementation manner of the present embodiment, the smooth gradient vector acquisition unit includes a first smooth region vector acquisition subunit, a second smooth region vector acquisition subunit, and a smooth gradient vector acquisition subunit. The functional subunits are described in detail as follows:
the first smooth region vector acquisition subunit is configured to perform a dot product between each first region vector corresponding to a first subsection in the first part of the first section and a preset smooth vector, to obtain the first smooth region vector corresponding to that first subsection;
the second smooth region vector acquisition subunit is configured to perform near-dimension averaging on each first region vector corresponding to a first subsection in the second part of the first section, to obtain the second smooth region vector corresponding to that first subsection;
and the smooth gradient vector acquisition subunit is configured to perform self-multiplication and evolution (root extraction) processing on the first smooth region vectors and second smooth region vectors of each first section, to obtain the smooth gradient vector of that first section.
Further, as an implementation manner of the present embodiment, the reference boundary coordinate acquisition module 44 includes a rough boundary mean coordinate acquisition unit, a first candidate vector acquisition unit, a first enhancement candidate vector acquisition unit, and a reference boundary coordinate acquisition unit. The functional units are explained in detail as follows:
the rough boundary mean coordinate acquisition unit is used for acquiring rough boundary mean coordinates of the gradient image according to rough boundary positions of fingers in the plurality of first sections;
the first candidate vector acquisition unit is used for expanding the pixels of the rough boundary mean coordinates according to the vertical direction of the gradient image by taking the pixels of the rough boundary mean coordinates as the center to obtain first candidate vectors corresponding to the rough boundary mean coordinates;
the first enhancement candidate vector acquisition unit is used for respectively carrying out point multiplication on the first candidate vectors and a preset first Gaussian filter vector to obtain first enhancement candidate vectors corresponding to each first candidate vector;
and the reference boundary coordinate acquisition unit is used for searching a maximum value in each first enhancement candidate vector and taking the coordinate corresponding to the maximum value as the reference boundary coordinate of the gradient image.
Further, as an implementation manner of the present embodiment, the reference average boundary coordinate acquiring module 45 includes a second section acquiring unit, a second section vector acquiring unit, a third smooth section vector acquiring unit, a second candidate vector acquiring unit, a second enhancement candidate vector acquiring unit, and a reference average boundary coordinate acquiring unit. The functional units are explained in detail as follows:
a second section acquiring unit, configured to extract a plurality of second sections from the gradient image according to a horizontal direction of the gradient image, where each adjacent second section is partially overlapped;
the second region vector acquisition unit is used for respectively calculating the gradient sum of each second section to obtain second region vectors corresponding to each second section;
a third smooth region vector obtaining unit, configured to perform a smoothing process on each second region vector to obtain a smoothed third smooth region vector;
a second candidate vector obtaining unit, configured to take the element of each third smooth region vector located at the reference boundary coordinate as a center, expand around that element along the vertical direction of the third smooth region vector, and obtain the second candidate vector corresponding to that third smooth region vector;
a second enhancement candidate vector obtaining unit, configured to perform a point multiplication between each second candidate vector and a preset second Gaussian filter vector, to obtain the second enhancement candidate vector corresponding to that second candidate vector;
and the reference average boundary coordinate acquisition unit is used for searching a maximum value in each second enhancement candidate vector and taking the coordinate corresponding to the maximum value as the reference average boundary coordinate of the finger in the second section.
A second embodiment of the present invention provides an apparatus for acquiring an ROI region of a finger vein image, which corresponds one-to-one to the above-described method for acquiring an ROI region of a finger vein image.
For specific definition of the device for acquiring the ROI region of the finger vein image, reference may be made to the above definition of the method for acquiring the ROI region of the finger vein image, and details thereof are not repeated here. The modules/units in the above mentioned device for acquiring the ROI region of the finger vein image can be implemented in whole or in part by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
A third embodiment of the present invention provides a computer device, which may be a server, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data involved in the acquisition method of the ROI area of the finger vein image. The network interface of the computer device is used for communicating with an external terminal through a network connection.
According to an embodiment of the present application, a computer device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the steps of the method for acquiring the ROI region of the finger vein image, such as steps 11 to 16 shown in fig. 2, steps 121 to 123 shown in fig. 3, steps 131 to 135 shown in fig. 4, steps 1341 to 1343 shown in fig. 5, steps 141 to 144 shown in fig. 6, and steps 151 to 156 shown in fig. 7.
A fourth embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the steps of the method for acquiring the ROI region of a finger vein image provided by the first embodiment of the present invention, such as steps 11 to 16 shown in fig. 2, steps 121 to 123 shown in fig. 3, steps 131 to 135 shown in fig. 4, steps 1341 to 1343 shown in fig. 5, steps 141 to 144 shown in fig. 6, and steps 151 to 156 shown in fig. 7. Alternatively, when executed by a processor, the computer program implements the functions of the modules/units of the apparatus for acquiring the ROI region of a finger vein image provided in the second embodiment described above. To avoid repetition, further description is omitted here.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same. Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A method for acquiring an ROI (region of interest) of a finger vein image, characterized by comprising the following steps:
acquiring a finger vein image;
preprocessing the finger vein image to obtain a gradient image;
extracting a plurality of first sections from the gradient image, and respectively acquiring rough boundary positions of fingers in the plurality of first sections;
obtaining a reference boundary coordinate of the finger in the gradient image according to the rough boundary position;
extracting a plurality of second sections from the gradient image, and respectively acquiring reference average boundary coordinates of the fingers in the plurality of second sections according to the reference boundary coordinates;
fitting the reference average boundary coordinates of the plurality of second sections to obtain the actual boundary of the finger, and taking the region formed by the actual boundary as the ROI region.
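Purely as a hedged illustration of the final fitting step in claim 1, the Python/NumPy lines below fit hypothetical reference average boundary coordinates with a low-order polynomial and treat the band between the fitted upper and lower curves as the ROI; the sample coordinates, the polynomial degree, and the image width are assumptions of this sketch, not values from the claim.

    import numpy as np

    # Hypothetical reference average boundary coordinates (column, row) of the second sections
    cols = np.array([10.0, 40.0, 70.0, 100.0, 130.0, 160.0])
    upper_rows = np.array([22.0, 20.5, 19.8, 20.1, 21.0, 22.4])
    lower_rows = np.array([98.0, 99.5, 100.2, 100.0, 99.1, 97.8])

    # Fit the actual upper and lower finger boundaries (quadratic chosen only for the sketch)
    upper_fit = np.polyfit(cols, upper_rows, deg=2)
    lower_fit = np.polyfit(cols, lower_rows, deg=2)

    # The region enclosed by the two fitted curves over the image width is taken as the ROI
    x = np.arange(0, 176)
    roi_top = np.polyval(upper_fit, x)
    roi_bottom = np.polyval(lower_fit, x)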
2. The method for acquiring the ROI region of the finger vein image according to claim 1, wherein the preprocessing the finger vein image to obtain the gradient image comprises:
performing adjacent pixel smoothing processing on each pixel in the finger vein image to obtain a smoothing value of each pixel in the horizontal direction of the finger vein image;
calculating a gradient value of each pixel in a vertical direction of the finger vein image according to the smoothing value of each pixel;
and obtaining the gradient image according to the gradient value of each pixel.
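A minimal NumPy sketch of the preprocessing in claim 2, assuming a three-pixel horizontal mean for the adjacent pixel smoothing and a central difference for the vertical gradient; both kernel choices are assumptions for illustration, and edge handling is simplified by wrap-around.

    import numpy as np

    def gradient_image(finger_vein_img):
        img = finger_vein_img.astype(float)
        # Adjacent pixel smoothing along the horizontal (column) direction
        smoothed = (np.roll(img, 1, axis=1) + img + np.roll(img, -1, axis=1)) / 3.0
        # Gradient value of each pixel along the vertical (row) direction
        grad = np.abs(np.roll(smoothed, -1, axis=0) - np.roll(smoothed, 1, axis=0)) / 2.0
        return grad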
3. The method for acquiring the ROI region of the finger vein image according to claim 2, wherein before the performing of the adjacent pixel smoothing processing on each pixel in the finger vein image to obtain the smoothing value of each pixel in the horizontal direction, the method further comprises:
and carrying out logarithmic transformation on the finger vein image to obtain the finger vein image subjected to logarithmic transformation.
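As a hedged illustration of the logarithmic transformation in claim 3, the following lines compress the dynamic range of an 8-bit finger vein image with log1p and rescale to the original grey range; the rescaling factor is an assumption of this sketch.

    import numpy as np

    def log_transform(finger_vein_img):
        img = finger_vein_img.astype(float)
        out = np.log1p(img)                      # lift dark regions, compress bright ones
        return out * (255.0 / np.log1p(255.0))   # rescale back to the 0-255 grey range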
4. The method for acquiring the ROI region of the finger vein image according to claim 1, wherein the extracting a plurality of first sections from the gradient image and respectively acquiring rough boundary positions of the finger in the plurality of first sections comprises:
extracting a plurality of first sections from the gradient image according to the horizontal direction of the gradient image;
dividing each first section into a plurality of first subsections along the horizontal direction;
respectively calculating the gradient sum of each first subsection to obtain a first region vector corresponding to each first subsection;
smoothing each first region vector according to a preset smoothing mode to obtain a smooth gradient vector corresponding to a first section;
and respectively searching the upper rough boundary position and the lower rough boundary position of the finger from each smooth gradient vector according to the vertical direction of the gradient image to obtain the rough boundary position of the finger in the first section.
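For claim 4, a sketch under the assumption that a first section is a strip of columns of the gradient image, that the first region vectors reduce to row-wise gradient sums, and that the upper and lower rough boundaries are the strongest responses in the top and bottom halves of the smoothed profile; the strip width, the smoothing length, and the half-split are all assumptions of this sketch.

    import numpy as np

    def rough_boundaries(grad, col_start, col_width=16, smooth_len=5):
        # One "first section": a strip of columns taken from the gradient image
        strip = grad[:, col_start:col_start + col_width]
        # Row-wise gradient sums stand in for the first region vectors of the subsections
        profile = strip.sum(axis=1)
        # Moving-average smoothing to obtain a smooth gradient vector for the section
        kernel = np.ones(smooth_len) / smooth_len
        smooth = np.convolve(profile, kernel, mode="same")
        mid = len(smooth) // 2
        upper = int(np.argmax(smooth[:mid]))          # rough upper boundary row
        lower = mid + int(np.argmax(smooth[mid:]))    # rough lower boundary row
        return upper, lower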
5. The method for acquiring the ROI region of the finger vein image according to claim 4, wherein the smoothing each first region vector according to the preset smoothing mode to obtain the smooth gradient vector corresponding to the first section comprises:
performing dot product processing on each first region vector corresponding to the first subsections of the first part of the first section and a preset smoothing vector, to obtain a first smoothed region vector corresponding to the first subsections of the first part of the first section;
respectively carrying out neighboring-dimension averaging processing on each first region vector corresponding to the first subsections of the second part of the first section, to obtain a second smoothed region vector corresponding to the first subsections of the second part of the first section;
and carrying out self-multiplication and square-root extraction processing on the first smoothed region vector and the second smoothed region vector in each first section to obtain the smooth gradient vector of each first section.
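The "self-multiplication and square-root extraction" in claim 5 is read here as an element-wise product of the two smoothed vectors followed by a square root, i.e. a geometric-mean fusion; that reading, and the two sample vectors, are assumptions of the sketch below.

    import numpy as np

    def fuse_smoothed(vec_a, vec_b):
        # Element-wise product followed by a square root (geometric mean)
        return np.sqrt(np.asarray(vec_a, dtype=float) * np.asarray(vec_b, dtype=float))

    # Hypothetical first and second smoothed region vectors of one first section
    v1 = np.array([1.0, 4.0, 9.0, 4.0, 1.0])
    v2 = np.array([2.0, 2.0, 8.0, 2.0, 2.0])
    print(fuse_smoothed(v1, v2))   # responses present in both vectors are reinforced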
6. The method for acquiring the ROI region of the finger vein image according to claim 1, wherein the obtaining the reference boundary coordinate of the finger in the gradient image according to the rough boundary position comprises:
obtaining a rough boundary mean coordinate of the gradient image according to the rough boundary positions of the finger in the plurality of first sections;
taking the pixel at the rough boundary mean coordinate as a center and expanding in the vertical direction of the gradient image to obtain a first candidate vector corresponding to the rough boundary mean coordinate;
respectively carrying out point multiplication operation on the first candidate vectors and a preset first Gaussian filter vector to obtain first enhancement candidate vectors corresponding to each first candidate vector;
and respectively searching a maximum value in each first enhancement candidate vector, and taking the coordinate corresponding to the maximum value as a reference boundary coordinate of the gradient image.
7. The method for acquiring the ROI region of the finger vein image according to claim 1, wherein the extracting a plurality of second sections from the gradient image and respectively acquiring the reference average boundary coordinates of the finger in the plurality of second sections according to the reference boundary coordinates comprises:
extracting a plurality of second sections from the gradient image according to the horizontal direction of the gradient image, wherein the adjacent second sections are partially overlapped;
respectively calculating the gradient sum of each second section to obtain a second region vector corresponding to each second section;
smoothing each second region vector to obtain a third smoothed region vector;
taking the element of the third smoothed region vector at which the reference boundary coordinate is located as a center, and expanding along the vertical direction of the third smoothed region vector to obtain a second candidate vector corresponding to the third smoothed region vector;
performing point multiplication operation on the second candidate vectors and a preset second Gaussian filter vector respectively to obtain second enhanced candidate vectors corresponding to each second candidate vector;
and respectively searching a maximum value in each second enhanced candidate vector, and taking the coordinate corresponding to the maximum value as the reference average boundary coordinate of the finger in the second section.
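To illustrate the overlapping second sections and the candidate window built around the reference boundary coordinate in claim 7, the sketch below uses a fixed section width, a 50% overlap, and a fixed half-window; all three values are assumptions of this sketch, and the Gaussian weighting and maximum search would then proceed as in the earlier sketch.

    import numpy as np

    def second_section_candidates(grad, ref_row, width=32, step=16, half_w=10):
        h, w = grad.shape
        candidates = []
        for c0 in range(0, w - width + 1, step):          # adjacent sections overlap by width - step
            profile = grad[:, c0:c0 + width].sum(axis=1)  # second region vector (row-wise gradient sums)
            smooth = np.convolve(profile, np.ones(3) / 3.0, mode="same")
            lo = max(ref_row - half_w, 0)                 # expand around the reference boundary coordinate
            hi = min(ref_row + half_w + 1, h)
            candidates.append((c0, lo, smooth[lo:hi]))    # second candidate vector for this section
        return candidates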
8. An apparatus for acquiring a region of interest (ROI) of a finger vein image, comprising:
the finger vein image acquisition module is used for acquiring a finger vein image;
the gradient image acquisition module is used for preprocessing the finger vein image to obtain a gradient image;
the rough boundary position acquisition module is used for extracting a plurality of first sections from the gradient image and respectively acquiring rough boundary positions of the finger in the plurality of first sections;
a reference boundary coordinate obtaining module, configured to obtain a reference boundary coordinate of the finger in the gradient image according to the rough boundary position;
a reference average boundary coordinate obtaining module, configured to extract a plurality of second segments from the gradient image, and obtain reference average boundary coordinates of the finger in the plurality of second segments according to the reference boundary coordinates;
and the ROI region acquisition module is used for fitting the reference average boundary coordinates of the plurality of second sections to obtain the actual boundary of the finger, and taking a region formed by the actual boundary as the ROI region.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method for acquiring a ROI region of a finger vein image according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method of acquiring a ROI region of a finger vein image according to any one of claims 1 to 7.
CN202010567319.8A 2020-06-19 2020-06-19 Method and device for acquiring ROI (region of interest) of finger vein image and related equipment Active CN111797828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010567319.8A CN111797828B (en) 2020-06-19 2020-06-19 Method and device for acquiring ROI (region of interest) of finger vein image and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010567319.8A CN111797828B (en) 2020-06-19 2020-06-19 Method and device for acquiring ROI (region of interest) of finger vein image and related equipment

Publications (2)

Publication Number Publication Date
CN111797828A (en) 2020-10-20
CN111797828B CN111797828B (en) 2023-04-07

Family

ID=72803974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010567319.8A Active CN111797828B (en) 2020-06-19 2020-06-19 Method and device for acquiring ROI (region of interest) of finger vein image and related equipment

Country Status (1)

Country Link
CN (1) CN111797828B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310196A (en) * 2013-06-13 2013-09-18 黑龙江大学 Finger vein recognition method by interested areas and directional elements
CN104933432A (en) * 2014-03-18 2015-09-23 北京思而得科技有限公司 Processing method for finger pulp crease and finger vein images
CN108280448A (en) * 2017-12-29 2018-07-13 北京智慧眼科技股份有限公司 The method of discrimination and device of finger intravenous pressing figure refer to vein identification method
CN108830158A (en) * 2018-05-16 2018-11-16 天津大学 The vein area-of-interest exacting method that finger contours and gradient distribution blend
KR20200041636A (en) * 2018-10-12 2020-04-22 전자부품연구원 Method for Extracting Biometric Information Pattern using a Fingerprint and Finger Vein
CN110163119A (en) * 2019-04-30 2019-08-23 中国地质大学(武汉) A kind of finger vein identification method and system
CN110348289A (en) * 2019-05-27 2019-10-18 广州中国科学院先进技术研究所 A kind of finger vein identification method based on binary map
CN110909631A (en) * 2019-11-07 2020-03-24 黑龙江大学 Finger vein image ROI extraction and enhancement method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102209A (en) * 2020-11-17 2020-12-18 四川圣点世纪科技有限公司 Abnormal vein image restoration method and device

Also Published As

Publication number Publication date
CN111797828B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN111860670A (en) Domain adaptive model training method, image detection method, device, equipment and medium
US8280196B2 (en) Image retrieval apparatus, control method for the same, and storage medium
CN108875621B (en) Image processing method, image processing device, computer equipment and storage medium
EP1760636B1 (en) Ridge direction extraction device, ridge direction extraction method, ridge direction extraction program
CN111797828B (en) Method and device for acquiring ROI (region of interest) of finger vein image and related equipment
CN110472498A (en) Identity identifying method, system, storage medium and equipment based on hand-characteristic
CN113034497A (en) Vision-based thermos cup weld positioning detection method and system
US10740590B2 (en) Skin information processing method, skin information processing device, and non-transitory computer-readable medium
CN116385745A (en) Image recognition method, device, electronic equipment and storage medium
CN108876776B (en) Classification model generation method, fundus image classification method and device
CN111898408B (en) Quick face recognition method and device
CN114930402A (en) Point cloud normal vector calculation method and device, computer equipment and storage medium
JP2017010419A (en) Information processing program and information processing device
CN111881789A (en) Skin color identification method and device, computing equipment and computer storage medium
CN109800702B (en) Quick comparison method for finger vein identification and computer readable storage medium
JPWO2015068417A1 (en) Image collation system, image collation method and program
CN114782715B (en) Vein recognition method based on statistical information
CN116091596A (en) Multi-person 2D human body posture estimation method and device from bottom to top
CN111079551B (en) Finger vein recognition method and device based on singular value decomposition and storage medium
CN111968087B (en) Plant disease area detection method
CN112308044B (en) Image enhancement processing method and palm vein identification method for palm vein image
CN111178202B (en) Target detection method, device, computer equipment and storage medium
KR20180072517A (en) Method for detecting borderline between iris and sclera
CN113724237A (en) Tooth mark recognition method and device, computer equipment and storage medium
CN112395988A (en) Finger vein recognition method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: No. 205, Building B1, Huigu Science and Technology Industrial Park, No. 336 Bachelor Road, Bachelor Street, Yuelu District, Changsha City, Hunan Province, 410000

Patentee after: Wisdom Eye Technology Co.,Ltd.

Country or region after: China

Address before: 410000 building 14, phase I, Changsha Zhongdian Software Park, No. 39, Jianshan Road, high tech Development Zone, Changsha City, Hunan Province

Patentee before: Wisdom Eye Technology Co.,Ltd.

Country or region before: China
