CN112784837B - Region of interest extraction method and device, electronic equipment and storage medium - Google Patents

Region of interest extraction method and device, electronic equipment and storage medium

Info

Publication number
CN112784837B
CN112784837B · CN202110107665.2A · CN202110107665A
Authority
CN
China
Prior art keywords
finger
image
edge
points
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110107665.2A
Other languages
Chinese (zh)
Other versions
CN112784837A (en)
Inventor
宋丹
姚琼
李文生
李华川
戴坤龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Productivity Promotion Center Of Xiaolan Town Zhongshan City
University of Electronic Science and Technology of China Zhongshan Institute
Original Assignee
Productivity Promotion Center Of Xiaolan Town Zhongshan City
University of Electronic Science and Technology of China Zhongshan Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Productivity Promotion Center Of Xiaolan Town Zhongshan City, University of Electronic Science and Technology of China Zhongshan Institute filed Critical Productivity Promotion Center Of Xiaolan Town Zhongshan City
Priority to CN202110107665.2A priority Critical patent/CN112784837B/en
Publication of CN112784837A publication Critical patent/CN112784837A/en
Application granted granted Critical
Publication of CN112784837B publication Critical patent/CN112784837B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/14Vascular patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Medical Informatics (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The application provides a region of interest extraction method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring a finger vein image of a person whose identity is to be authenticated; performing a convolution operation on the finger vein image to obtain a convolution result image; performing a binarization operation on the convolution result image to obtain a binarized image; performing a refinement operation on the binarized image to obtain a refined image; performing curve tracking on the refined image to obtain a finger edge curve; and segmenting the finger vein image according to the finger edge curve to obtain a region of interest of the finger vein image. In this implementation, convolution, binarization, refinement and curve tracking effectively improve the accuracy of the obtained finger edge curve, and segmenting the finger vein image with this high-accuracy edge curve in turn improves the accuracy of the extracted region of interest.

Description

Region of interest extraction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing and image recognition technologies, and in particular, to a method and apparatus for extracting a region of interest, an electronic device, and a storage medium.
Background
The region of interest (Region Of Interest, ROI) refers to a region that reflects identity features; that is, features extracted from the region of interest can be used for identity authentication or identification. The main purpose of region of interest extraction is to keep the region that reflects identity features and remove the regions irrelevant to them, so that the irrelevant regions do not interfere with the accuracy of identity authentication.
At present, edge-based extraction methods are used to extract the region of interest in a finger vein image; these include, but are not limited to, edge detection based on the Sobel operator or on the Canny operator. In practice, it has been found that, because of differences in the acquisition environment and equipment, many edge lines appear in the background of a finger vein image. These lines interfere with the extraction of the finger edge, so the accuracy of edge-based extraction of the region of interest is not high for finger vein images with a complex background.
Disclosure of Invention
An object of the embodiments of the present application is to provide a region of interest extraction method and apparatus, an electronic device, and a storage medium, so as to improve the low accuracy of extracting a region of interest from a finger vein image with a complex background.
An embodiment of the present application provides a region of interest extraction method, comprising: acquiring a finger vein image of a person whose identity is to be authenticated; performing a convolution operation on the finger vein image to obtain a convolution result image; performing a binarization operation on the convolution result image to obtain a binarized image; performing a refinement operation on the binarized image to obtain a refined image; performing curve tracking on the refined image to obtain a finger edge curve; and segmenting the finger vein image according to the finger edge curve to obtain a region of interest of the finger vein image. In this implementation, convolution, binarization, refinement and curve tracking effectively improve the accuracy of the obtained finger edge curve, and segmenting the finger vein image with this high-accuracy edge curve in turn improves the accuracy of the extracted region of interest.
Optionally, in an embodiment of the present application, performing curve tracking on the refined image to obtain a finger edge curve comprises: locating a plurality of finger edge points in the refined image; and verifying and fitting the plurality of finger edge points to obtain the finger edge curve. In this implementation, locating the finger edge points in the refined image and then verifying and fitting them allows large fitting errors to be detected by the verification operation. This avoids fitting a badly wrong finger edge curve in a finger vein image with a complex background and thereby effectively improves the fitting accuracy.
Optionally, in an embodiment of the present application, locating a plurality of finger edge points in the refined image comprises: removing illumination interference in the refined image to obtain an interference-removed refined image; and locating the plurality of finger edge points in the interference-removed refined image. In this implementation, locating the edge points only after the illumination interference has been removed reduces the probability that edge points are mislocated under the influence of illumination, and thus effectively improves the accuracy of locating the plurality of finger edge points.
Optionally, in an embodiment of the present application, the finger vein image comprises a finger upper edge region and a finger lower edge region, and performing a convolution operation on the finger vein image to obtain a convolution result image comprises: performing a convolution operation on the finger upper edge region with a first convolution template to obtain an upper edge convolution feature map; performing a convolution operation on the finger lower edge region with a second convolution template to obtain a lower edge convolution feature map, the first and second convolution templates being symmetric in the vertical direction; and merging the upper edge convolution feature map and the lower edge convolution feature map to obtain the convolution result image. In this implementation, convolving the upper and lower edge regions separately with two vertically symmetric templates reduces the amount of computation and effectively speeds up obtaining the convolution result image.
Optionally, in an embodiment of the present application, performing a refinement operation on the binarized image comprises: judging whether there are vertically continuous pixel points in the binarized image; if so, thinning each run of vertically continuous pixel points to its center point. In this implementation, thinning vertical runs of pixels to their center points avoids shortening the finger edge in the horizontal direction and thereby effectively improves the accuracy of the extracted region of interest.
Optionally, in an embodiment of the present application, performing a binarization operation on the convolution result image includes: judging whether each pixel point value in the convolution result image is smaller than a preset threshold value or not; if yes, the pixel value is set to zero, otherwise, the pixel value is set to one.
Optionally, in an embodiment of the present application, after the region of interest of the finger vein image is obtained, the method further comprises: extracting image features from the region of interest; calculating the similarity between the image features and a plurality of template features in a feature template library to obtain a plurality of similarities; and determining the identity information of the person to be authenticated according to the plurality of similarities. In this implementation, determining the identity information from a region of interest with higher accuracy avoids the complex background of the finger vein image affecting the authentication result, and thereby effectively improves the accuracy of authenticating the person.
The embodiment of the application also provides a device for extracting the region of interest, which comprises: the vein image acquisition module is used for acquiring a finger vein image of the identity personnel to be authenticated; the image convolution operation module is used for carrying out convolution operation on the finger vein image to obtain a convolution result image; the binary image obtaining module is used for carrying out binarization operation on the convolution result image to obtain a binarized image; the refined image obtaining module is used for carrying out refining operation on the binarized image to obtain a refined image; the edge curve obtaining module is used for carrying out curve tracking on the thinned image to obtain a finger edge curve; the region of interest obtaining module is used for carrying out image segmentation on the finger vein image according to the finger edge curve to obtain the region of interest of the finger vein image.
Optionally, in an embodiment of the present application, the edge curve obtaining module includes: a refined image positioning module, used for locating a plurality of finger edge points in the refined image; and a curve verification and fitting module, used for verifying and fitting the plurality of finger edge points to obtain the finger edge curve.
Optionally, in an embodiment of the present application, the refinement image positioning module includes: the illumination interference removing module is used for removing illumination interference in the refined image and obtaining the refined image after interference removal; and the finger edge point positioning module is used for positioning a plurality of finger edge points from the refined image after the interference is removed.
Optionally, in an embodiment of the present application, the finger vein image includes: a finger upper edge region and a finger lower edge region; an image convolution operation module, comprising: the upper edge convolution calculation module is used for carrying out convolution operation on the upper edge region of the finger by using the first convolution template to obtain an upper edge convolution feature map; the lower edge convolution calculation module is used for carrying out convolution operation on the lower edge region of the finger by using a second convolution template to obtain a lower edge convolution characteristic diagram, and the first convolution template and the second convolution template are symmetrical in the vertical direction; and the edge convolution merging module is used for merging the upper edge convolution characteristic image and the lower edge convolution characteristic image to obtain a convolution result image.
Optionally, in an embodiment of the present application, the refinement image obtaining module includes: the continuous pixel judging module is used for judging whether continuous pixel points in the vertical direction exist in the binarized image; and the continuous pixel refinement module is used for refining the continuous pixel points into central points of the continuous pixel points if the continuous pixel points in the vertical direction exist in the binarized image.
Optionally, in an embodiment of the present application, the binary image obtaining module includes: a pixel value judging module, used for judging whether each pixel value in the convolution result image is smaller than a preset threshold; and a pixel value setting module, used for setting the pixel value to zero if it is smaller than the preset threshold, and to one otherwise.
Optionally, in an embodiment of the present application, the region of interest extraction device further includes: the image feature extraction module is used for extracting image features in the region of interest; the similarity calculation judging module is used for calculating the similarity between the image features and a plurality of template features in the feature template library to obtain a plurality of similarities; and the identity information authentication module is used for determining the identity information of the identity personnel to be authenticated according to the multiple similarities.
The embodiment of the application also provides an electronic device, comprising: a processor and a memory storing machine-readable instructions executable by the processor, the instructions, when executed by the processor, performing the method described above.
The present embodiments also provide a storage medium having stored thereon a computer program which, when executed by a processor, performs a method as described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a method for extracting a region of interest according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a convolution template provided by an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a change in image processing procedure according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a process of locating finger edge points and curve tracking according to an embodiment of the present application;
fig. 5 is a schematic flow chart of finger vein authentication according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a region of interest extraction device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Before introducing the method for extracting the region of interest provided in the embodiments of the present application, some concepts involved in the embodiments of the present application are described first:
Finger vein authentication (finger-vein personal identification) is a biometric recognition technology that uses the vein distribution image inside a finger for identity recognition.
It should be noted that the region of interest extraction method provided in the embodiments of the present application may be executed by an electronic device, where the electronic device refers to a device terminal or a server capable of executing a computer program. The device terminal is, for example, a smartphone, a personal computer (PC), a tablet computer or a personal digital assistant; the server is, for example, an x86 server or a non-x86 server, the latter including mainframes, minicomputers and the like.
Before introducing the region of interest extraction method provided in the embodiments of the present application, the application scenarios to which it is applicable are introduced. These scenarios include, but are not limited to: extracting the region of interest from a finger vein image with the method, performing identity authentication or identity recognition based on the extracted region of interest, and enhancing the functions of a security system or an access control system accordingly.
Please refer to the flow chart of the region of interest extraction method provided in an embodiment of the present application shown in fig. 1. The main idea of the method is based on the following observation: the edge lines in the background of a finger vein image are short, disordered in direction, unstable and irregular, whereas the finger edge curves acquired in the horizontal direction are long and approximately horizontal, so curve tracking can distinguish the background edge lines from the finger edge curves well. Therefore, convolution, binarization, refinement and curve tracking effectively improve the accuracy of the obtained finger edge curve, and segmenting the finger vein image with this high-accuracy edge curve in turn improves the accuracy of the extracted region of interest. The region of interest extraction method may include:
Step S110: and acquiring a finger vein image of the identity personnel to be authenticated.
There are several ways to acquire the finger vein image in step S110. In the first way, a terminal device such as an infrared camera or a CCD camera photographs the finger of the person to be authenticated to acquire a finger vein image; the terminal device then sends the finger vein image to the electronic device, which receives it and may store it in a file system, a database or a mobile storage device. In the second way, a finger vein image stored in advance is acquired, for example from a file system, a database or a mobile storage device.
It will be appreciated that, depending on the orientation of the finger during acquisition, the acquired finger vein image may be divided differently. If the finger is acquired transversely, the finger vein image may include a finger upper edge region and a finger lower edge region; if the finger is acquired longitudinally, it may include a finger left edge region and a finger right edge region.
After step S110, step S120 is performed: and carrying out convolution operation on the finger vein image to obtain a convolution result image.
The above-mentioned implementation of step S120 is very various, including but not limited to the following:
In the first embodiment, if the finger is acquired transversely, convolution operations are performed on the upper and lower finger edges separately, and the two convolution results are then merged into a convolution result image. Please refer to the schematic diagram of the convolution templates provided in the embodiment of the present application shown in fig. 2. The convolution templates in the figure include a first convolution template and a second convolution template; the first template can be used to extract convolution features of the upper finger edge, the second template to extract convolution features of the lower finger edge, and the two templates are symmetric in the vertical direction. The first embodiment may include:
step S121: and carrying out convolution operation on the upper edge region of the finger by using a first convolution template to obtain an upper edge convolution characteristic diagram.
Step S122: and carrying out convolution operation on the lower edge region of the finger by using a second convolution template to obtain a lower edge convolution characteristic diagram.
Please refer to fig. 3, which illustrates the changes of the image during processing according to an embodiment of the present application. Steps S121 to S122 are implemented, for example, as follows: the finger vein image is cut into a finger upper edge region and a finger lower edge region as shown in sub-graph (a) of fig. 3; the first convolution template is then applied to the finger upper edge region to obtain the upper edge convolution feature map, and the second convolution template is applied to the finger lower edge region to obtain the lower edge convolution feature map. The size of both templates may be 5×12, where 5 is the width of the template and 12 is its length; templates of other sizes may be selected according to the specific situation.
Step S123: and combining the upper edge convolution feature map and the lower edge convolution feature map to obtain a convolution result image.
Step S123 is implemented, for example, by merging the upper edge convolution feature map and the lower edge convolution feature map in the vertical direction; the resulting convolution result image is shown in sub-graph (b) of fig. 3.
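For illustration only, the following Python sketch mirrors steps S121 to S123 under the assumption that the finger vein image is a NumPy array acquired transversely. The 5×12 template values below are hypothetical stand-ins; the patent's actual coefficients are only shown in fig. 2.

```python
import numpy as np
from scipy.ndimage import correlate  # applies the template as a sliding window, as is usual for image templates

def convolve_edges(finger_img: np.ndarray) -> np.ndarray:
    h = finger_img.shape[0]
    upper, lower = finger_img[: h // 2], finger_img[h // 2 :]
    # Hypothetical 5x12 template responding to a dark-above / bright-below horizontal edge
    # (the upper finger edge); the second template is its vertical mirror.
    first_template = np.vstack([np.full((2, 12), -1.0),
                                np.zeros((1, 12)),
                                np.full((2, 12), 1.0)])
    second_template = np.flipud(first_template)          # symmetric in the vertical direction
    upper_feat = correlate(upper.astype(float), first_template)
    lower_feat = correlate(lower.astype(float), second_template)
    return np.vstack([upper_feat, lower_feat])            # merge in the vertical direction
```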
In the second embodiment, if the finger is acquired longitudinally, convolution operations are performed on the left and right finger edges separately, and the left edge and right edge convolution feature maps are merged in the horizontal direction to obtain the convolution result image. This embodiment is similar to the first one and differs only in that vein images acquired in a different orientation are processed.
After step S120, step S130 is performed: and performing binarization operation on the convolution result image to obtain a binarized image.
Step S130 is implemented, for example, as follows: judge whether each pixel value in the convolution result image is smaller than a preset threshold; if it is, set the pixel value to zero, otherwise set it to one. The preset threshold may be specified manually, for example 100, 200 or 250, or it may be a global adaptive threshold computed by a program. After this binarization operation is performed on the convolution result image, the resulting binarized image is shown in sub-graph (c) of fig. 3.
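A minimal sketch of this thresholding step, assuming a NumPy array input; the default threshold of 200 is just one of the example values mentioned above.

```python
import numpy as np

def binarize(conv_img: np.ndarray, threshold: float = 200.0) -> np.ndarray:
    # Pixel values below the threshold become 0, all other pixels become 1.
    return (conv_img >= threshold).astype(np.uint8)
```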
After step S130, step S140 is performed: and carrying out refinement operation on the binarized image to obtain a refined image.
The above-mentioned implementation of step S140 is very various, including but not limited to the following:
In the first embodiment, in order to avoid shortening the edges in the horizontal direction, the thinning operation may be performed in the vertical direction, for example: judge whether there are vertically continuous pixel points in the binarized image (that is, runs of foreground pixels with no background (black) pixels between them); if such vertically continuous pixel points exist, thin each run to its center point. Performing this thinning operation on the binarized image yields the thinned image shown in sub-graph (d) of fig. 3.
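A sketch of this first, vertical-direction thinning embodiment, assuming a 0/1 NumPy array: each vertical run of foreground pixels in a column is replaced by its center point.

```python
import numpy as np

def thin_vertical(binary_img: np.ndarray) -> np.ndarray:
    thinned = np.zeros_like(binary_img)
    h, w = binary_img.shape
    for x in range(w):
        y = 0
        while y < h:
            if binary_img[y, x]:
                start = y
                while y < h and binary_img[y, x]:      # extend the vertical run of foreground pixels
                    y += 1
                thinned[(start + y - 1) // 2, x] = 1   # keep only the center point of the run
            else:
                y += 1
    return thinned
```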
In the second embodiment, the lines in the binarized image are thinned in all directions using a thinning algorithm, which includes, but is not limited to, the Hilditch thinning algorithm, the Pavlidis thinning algorithm, the Rosenfeld thinning algorithm, and the like.
After step S140, step S150 is performed: and (5) carrying out curve tracking on the thinned image to obtain a finger edge curve.
The above-mentioned implementation of step S150 is various, including but not limited to the following:
In the first embodiment, curve tracking is performed by verifying and fitting a plurality of finger edge points in the refined image, and may include:
step S151: a plurality of finger edge points in the refined image are located.
Optionally, in a specific implementation, the illumination interference in the refined image may first be removed to obtain an interference-removed refined image. The illumination interference may include regular interference and irregular interference: regular interference refers to interference whose characteristics appear identically in every acquired finger vein image, whereas irregular interference refers to per-image interference caused by differences in illumination, movement of the acquisition equipment, finger placement and so on, so that its appearance differs from image to image. The plurality of finger edge points are then located in the interference-removed refined image.
Regular interference is removed from the refined image, for example, as follows. As shown in the refined image in sub-graph (d) of fig. 3, there is a parabolic white light streak at a fixed position at the left and right ends of essentially every finger vein image, and these streaks intersect the upper and lower finger edges, interfering with the accurate positioning and curve tracking of the edges. The foreground points generated by the two parabolic light streaks at the left and right ends can therefore be removed. The removal process is as follows: first, a parabolic equation is fitted to the centerline of each light streak, giving approximately x1 = 0.0023 × (y - 191)² + 1.1587 for the left streak and x2 = -0.0021 × (y - 191)² + 484 for the right streak, where x denotes the abscissa of a pixel in the image, x1 and x2 denote the abscissas of the pixels on the centerlines of the left and right streaks respectively, and y denotes the ordinate of a pixel in the image. After the centerlines have been fitted, y is traversed from 0 to h - 1, where h is the height of the original finger vein image; for each y, x1 and x2 are computed and the foreground points in the ranges [x1 - 10, x1 + 10] and [x2 - 10, x2 + 10] are removed. After the regular interference in the refined image has been removed, a first interference-removed image is obtained as shown in sub-graph (e) of fig. 3.
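An illustrative sketch of this regular-interference removal, using the example parabola coefficients quoted above (in practice they would be re-fitted for the specific acquisition device) and a ±10 pixel band around each centerline.

```python
import numpy as np

def remove_parabolic_light(thinned: np.ndarray) -> np.ndarray:
    # Suppress foreground points near the two fixed parabolic light streaks.
    cleaned = thinned.copy()
    h, w = cleaned.shape
    for y in range(h):
        x1 = 0.0023 * (y - 191) ** 2 + 1.1587       # centerline of the left streak
        x2 = -0.0021 * (y - 191) ** 2 + 484.0       # centerline of the right streak
        for xc in (x1, x2):
            lo = max(0, int(round(xc)) - 10)
            hi = min(w, int(round(xc)) + 11)
            cleaned[y, lo:hi] = 0                   # remove foreground in [xc - 10, xc + 10]
    return cleaned
```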
Irregular interference is removed from the refined image, for example, as follows. As can be seen from sub-graph (e) of fig. 3, the lower half of the image contains a number of jagged foreground points that form many very short lines; these are irregular interference caused by the illumination of the external environment. This interference can be removed as follows: the image area enclosed by the two parabolas is scanned once, and every line whose length is smaller than or equal to a preset threshold is deleted, where deletion means setting the line to the background color (for example black), and the preset threshold may be set according to the specific situation, for example 20, 30 or 50. After the irregular interference in the refined image has been removed, a second interference-removed image is obtained as shown in sub-graph (f) of fig. 3.
Step S151 is implemented, for example, as follows: the finger upper edge region of the finger vein image is located with a positioning algorithm, and the finger lower edge region is located with the same algorithm, thereby locating the plurality of finger edge points in the refined image. The positioning algorithm proceeds, for example, as follows: first, the initial parameters to be used are set; second, the starting point of the finger edge points is determined using the initial parameters; then, the next finger edge point is tracked in a loop starting from the starting point; finally, the starting point and all finger edge points tracked in the loop are taken as the plurality of finger edge points in the refined image.
Please refer to fig. 4, which is a schematic diagram illustrating a process of locating finger edge points and tracking curves according to an embodiment of the present application; the following is a detailed description of the process of locating finger edge points and curve tracking:
First, the initial parameters to be used are set; they may be adjusted and modified according to the specific situation. Assume the following initial parameters: the finger vein image length is denoted M1 = 100, the preset region length is denoted M2 = 50, the threshold on the number of points used in curve fitting is denoted N = 15, and the angle threshold used in curve fitting is denoted θ = 10 degrees (which can be adjusted as the case may be). The initial position of the scanning vertical line is denoted d = 0, and Ci = [] denotes a set of finger edge points (which can also be understood as the column of finger edge points arranged in the image), where i may be 1 or 2: C1 denotes the set of finger edge points already located in the upper edge region, and C2 denotes the set of finger edge points already located in the lower edge region.
Second, the starting points of the finger edge points are determined using the initial parameters, for example as follows: a vertical line, denoted l: x = d, is scanned from left to right to determine the left starting points of the upper and lower edges. The finger edge points of the upper edge are located from the midpoint of the vertical line towards the top, and those of the lower edge from the midpoint towards the bottom. The first foreground point found in the midpoint-to-top region is taken as the left starting point of the upper edge, giving C1 = [Q0]; correspondingly, the first foreground point found in the midpoint-to-bottom region is taken as the left starting point of the lower edge, giving C2 = [Q0]. If d > M1 and no left starting point has been found, the program exits and the determination of the left starting points fails.
Then, the next finger edge point is tracked in a loop starting from the starting point. As shown in fig. 4, N finger edge points [Qk, Qk+1, ..., Qk+N-1] are taken from the end of the finger edge point set Ci (i.e. the most recently stored ones), where N can be set according to the specific situation; if the set Ci contains fewer than N finger edge points, all of them are taken. A polynomial is then fitted to these N finger edge points, giving the tangent line lQ of the fitted curve at Qk+N-1; the tangent lQ is rotated counterclockwise by θ degrees to obtain the straight line lQ+θ, and clockwise by θ degrees to obtain the straight line lQ-θ. A foreground point R is sought in the region enclosed by lQ-θ, lQ+θ and the vertical line x = x(Qk+N-1) + M2, where x(Qk+N-1) denotes the abscissa of Qk+N-1; if the line connecting R and Qk+N-1 makes the smallest angle with lQ, the foreground point R is appended to the end of the set Ci. If R already lies beyond the rightmost end of the finger vein image (which can be determined by comparing the abscissa of R with the image length), the step "take the starting point and all finger edge points tracked in the loop as the plurality of finger edge points in the refined image" is executed directly; otherwise the tracking of the next finger edge point continues. If no such foreground point R is found and the distance between the current vertical line and the right boundary of the finger vein image is smaller than M2, the final step is likewise executed directly; otherwise, the last finger edge point at the end of Ci is deleted, where deletion in sub-graph (f) of fig. 3 means setting that edge point to the background color (for example black).
Finally, the starting point and all finger edge points tracked in the loop, that is, all finger edge points in the set Ci, are taken as the plurality of finger edge points in the refined image.
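A condensed Python sketch of the tracking loop described above, assuming a 0/1 NumPy array and a first-order polyfit for the local tangent. The candidate search scans a rectangular window rather than the exact wedge between lQ-θ and lQ+θ, so it is illustrative only; M2, N and theta_deg follow the names used in the text, and start is a left starting point found by the vertical-line scan. The function would be called once for the upper edge and once for the lower edge.

```python
import numpy as np

def track_edge(img: np.ndarray, start: tuple, M2: int = 50, N: int = 15,
               theta_deg: float = 10.0) -> list:
    """Track one finger edge from a left starting point (x, y)."""
    img = img.astype(bool).copy()           # work on a copy; deleted points are blanked
    h, w = img.shape
    points = [start]                        # the set C_i, stored as (x, y) pairs
    while True:
        tail = points[-N:]                  # the last N (or fewer) tracked edge points
        xs = np.array([p[0] for p in tail], dtype=float)
        ys = np.array([p[1] for p in tail], dtype=float)
        slope = np.polyfit(xs, ys, 1)[0] if len(tail) >= 2 else 0.0
        phi = np.arctan(slope)              # direction of the tangent l_Q at the tail point
        x_last, y_last = points[-1]
        best, best_dev = None, None
        for x in range(x_last + 1, min(x_last + M2, w)):
            for y in range(max(0, y_last - M2), min(h, y_last + M2)):
                if not img[y, x]:
                    continue
                dev = abs(np.arctan2(y - y_last, x - x_last) - phi)
                if dev <= np.radians(theta_deg) and (best_dev is None or dev < best_dev):
                    best, best_dev = (x, y), dev    # candidate R closest to l_Q
        if best is None:
            if w - x_last < M2 or len(points) == 1:
                break                       # near the right boundary (or only the start left): stop
            x_del, y_del = points.pop()     # delete the last tracked edge point ...
            img[y_del, x_del] = False       # ... and blank it, as in sub-graph (f)
            continue
        points.append(best)
        if best[0] >= w - 1:
            break                           # reached the rightmost end of the image
    return points
```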
Step S152: and performing checksum fitting on the plurality of finger edge points to obtain a finger edge curve.
Step S152 is implemented, for example, as follows: the plurality of finger edge points are verified and piecewise fitted to obtain the finger edge curve, which is drawn in the finger vein image as shown in sub-graph (g) of fig. 3. Specifically: the N points at the front end of Ci are selected and a fitting operation is performed on them to fill in the points missing at the beginning of Ci (i = 1, 2); the N points on either side of a break are selected and fitted to fill in the points missing in the middle of Ci (i = 1, 2); the N points at the tail end are selected and fitted to fill in the points missing at the end of Ci (i = 1, 2); finally Ci (i = 1, 2) is scanned from left to right, and if the deviation between a finger edge point and its adjacent points is larger than a preset threshold, the fitting operation is performed again using the adjacent points to recompute the position coordinates of that edge point. It will be appreciated that each fitting operation is performed on N points of Ci at a time; this can be understood as piecewise fitting, which effectively reduces curve-fitting errors and thus effectively improves the accuracy of the finger edge curve fitted in a finger vein image with a complex background. The final fitting operation yields the finger edge curve shown in sub-graph (g) of fig. 3, which may include an upper finger edge curve and a lower finger edge curve.
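A simplified sketch of this verify-and-fit step, assuming each edge is given as (x, y) points from the tracker above, that a quadratic polyfit per N-point segment is an acceptable stand-in for the fitting operation, and that max_dev plays the role of the preset deviation threshold (its value is assumed).

```python
import numpy as np

def verify_and_fit(points: list, width: int, N: int = 15, max_dev: float = 5.0) -> np.ndarray:
    """Return one edge row coordinate per image column for a single edge curve."""
    points = sorted(points)                          # (x, y) pairs ordered by abscissa
    xs = np.array([p[0] for p in points], dtype=float)
    ys = np.array([p[1] for p in points], dtype=float)
    curve = np.full(width, np.nan)
    for s in range(0, len(points), N):               # piecewise: fit N points per segment
        e = min(s + N, len(points))
        if e - s < 3:
            continue
        coeffs = np.polyfit(xs[s:e], ys[s:e], 2)
        lo, hi = int(xs[s]), int(xs[e - 1])
        cols = np.arange(lo, hi + 1)
        curve[lo:hi + 1] = np.polyval(coeffs, cols)  # also fills columns with no tracked point
    # verification pass: an edge point deviating too much from its neighbours is re-placed
    for x in range(1, width - 1):
        if np.isnan(curve[x]) or np.isnan(curve[x - 1]) or np.isnan(curve[x + 1]):
            continue
        neighbour_mean = 0.5 * (curve[x - 1] + curve[x + 1])
        if abs(curve[x] - neighbour_mean) > max_dev:
            curve[x] = neighbour_mean                # recompute the outlier from its neighbours
    return curve
```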
After step S150, step S160 is performed: and image segmentation is carried out on the finger vein image according to the finger edge curve, and an interested region of the finger vein image is obtained.
Step S160 is implemented, for example, as follows: the finger vein image is segmented according to the upper and lower finger edge curves, that is, the image part containing the finger veins is cut out, giving the region of interest of the finger vein image shown in sub-graph (h) of fig. 3.
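A minimal sketch of this segmentation step, assuming the two finger edge curves are given as one row coordinate per image column (for example the output of a fit like the one above) with no missing columns.

```python
import numpy as np

def segment_roi(finger_img: np.ndarray, upper_curve: np.ndarray,
                lower_curve: np.ndarray) -> np.ndarray:
    # upper_curve[x] / lower_curve[x] give the row of the upper / lower finger edge
    # in column x; only the pixels between the two curves are kept.
    roi = np.zeros_like(finger_img)
    for x in range(finger_img.shape[1]):
        top = int(round(upper_curve[x]))
        bottom = int(round(lower_curve[x]))
        roi[top:bottom, x] = finger_img[top:bottom, x]
    return roi
```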
In this implementation, it is observed that the edge lines in the background of a finger vein image are short, disordered in direction, unstable and irregular, whereas the finger edge curves acquired in the horizontal direction are long and approximately horizontal, so curve tracking can distinguish the background edge lines from the finger edge curves well. Therefore, convolution, binarization, refinement and curve tracking effectively improve the accuracy of the obtained finger edge curve, and segmenting the finger vein image with this high-accuracy edge curve in turn improves the accuracy of the extracted region of interest.
Please refer to fig. 5, which illustrates a flow chart of finger vein authentication provided in an embodiment of the present application; optionally, after obtaining the region of interest, the obtained region of interest may also be used for identity authentication, and the embodiment may include:
step S210: and acquiring a finger vein image of the identity personnel to be authenticated.
Step S220: and carrying out convolution operation on the finger vein image to obtain a convolution result image.
Step S230: and performing binarization operation on the convolution result image to obtain a binarized image.
Step S240: and carrying out refinement operation on the binarized image to obtain a refined image.
Step S250: and (5) carrying out curve tracking on the thinned image to obtain a finger edge curve.
Step S260: and image segmentation is carried out on the finger vein image according to the finger edge curve, and an interested region of the finger vein image is obtained.
The implementation principle and implementation of the steps S210 to S260 are similar to those of the steps S110 to S160, and thus, the implementation principle and implementation thereof will not be described herein, and reference may be made to the descriptions of the steps S110 to S160, if not clear.
Step S270: image features in a region of interest are extracted.
The above-mentioned implementation of step S270 is various, including but not limited to the following:
In the first embodiment, the image features in the region of interest are extracted using a machine learning algorithm, including, but not limited to: decision trees, Bayesian learning, instance-based learning, genetic algorithms, rule learning, explanation-based learning, the histogram of oriented gradients (HOG) feature extraction algorithm, and the like.
In the second embodiment, the image features in the region of interest are extracted using a neural network model, including, but not limited to: the Feature Fusion Single Shot Multibox Detector (FSSD), LeNet, AlexNet, GoogLeNet, VGG, ResNet, Wide ResNet, Inception networks, and the like.
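For illustration, one possible feature extractor for the region of interest, assuming scikit-image is available; HOG is only one of the options listed above, and the parameters are typical defaults rather than values taken from the patent.

```python
from skimage.feature import hog

def roi_features(roi_img):
    # Histogram-of-oriented-gradients descriptor of the ROI as a 1-D feature vector.
    return hog(roi_img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)
```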
Step S280: and calculating the similarity between the image features and a plurality of template features in the feature template library to obtain a plurality of similarities.
Step S280 is implemented, for example, as follows: the similarity between the image features and each of a plurality of template features in a feature template library is calculated, giving a plurality of similarities. The similarity measures that may be used include, but are not limited to: cosine distance, Euclidean distance, Hamming distance, information entropy, and the like.
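A minimal sketch of this matching step using cosine similarity (one of the measures listed above), assuming the features are 1-D NumPy vectors.

```python
import numpy as np

def cosine_similarity(feat: np.ndarray, template: np.ndarray) -> float:
    # Cosine similarity between the query feature and one template feature.
    denom = np.linalg.norm(feat) * np.linalg.norm(template) + 1e-12
    return float(np.dot(feat, template) / denom)

def match_library(feat: np.ndarray, template_library) -> list:
    # One similarity per template feature in the library.
    return [cosine_similarity(feat, t) for t in template_library]
```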
Step S290: and determining the identity information of the identity personnel to be authenticated according to the multiple similarities.
The above-mentioned implementation of step S290 is various, including but not limited to the following:
in a first embodiment, identity information corresponding to the maximum similarity among the multiple similarities is determined as identity information of an identity person to be authenticated.
In the second embodiment, a similarity threshold is determined from the plurality of similarities, and the identity information is determined according to the similarity threshold. The similarity threshold can be determined in several ways: it can be a manually set constant; a value between the average and the maximum of the plurality of similarities can be chosen as the threshold; or the minimum similarity among feature images of the same class and the maximum similarity among feature images of different classes can be selected, and the average of these two values taken as the similarity threshold.
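A sketch combining the two decision embodiments, assuming each template feature is associated with an identity label and that the similarity threshold, if used, has been chosen by one of the strategies above.

```python
def authenticate(similarities, identities, threshold=None):
    # Pick the identity with the highest similarity; if a threshold is given,
    # accept the match only when the best similarity clears the threshold.
    best = max(range(len(similarities)), key=lambda i: similarities[i])
    if threshold is not None and similarities[best] < threshold:
        return None                       # no identity is confidently matched
    return identities[best]
```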
Please refer to fig. 6, which illustrates a schematic structural diagram of an apparatus for extracting a region of interest according to an embodiment of the present application. The embodiment of the application provides a region of interest extraction device 300, which comprises:
The vein image acquisition module 310 is configured to acquire a finger vein image of an identity person to be authenticated.
The image convolution operation module 320 is configured to perform convolution operation on the finger vein image to obtain a convolution result image.
The binary image obtaining module 330 is configured to perform a binarization operation on the convolution result image to obtain a binarized image.
And the refined image obtaining module 340 is configured to perform a refinement operation on the binarized image to obtain a refined image.
The edge curve obtaining module 350 is configured to perform curve tracking on the thinned image to obtain a finger edge curve.
The region of interest obtaining module 360 is configured to obtain a region of interest of the finger vein image by performing image segmentation on the finger vein image according to the finger edge curve.
Optionally, in an embodiment of the present application, the edge curve obtaining module includes:
and the thinned image positioning module is used for positioning a plurality of finger edge points in the thinned image.
And the curve verification and fitting module is used for verifying and fitting the plurality of finger edge points to obtain the finger edge curve.
Optionally, in an embodiment of the present application, the refinement image positioning module includes:
the illumination interference removing module is used for removing illumination interference in the refined image and obtaining the refined image after interference removal.
And the finger edge point positioning module is used for positioning a plurality of finger edge points from the refined image after the interference is removed.
Optionally, in an embodiment of the present application, the finger vein image includes: a finger upper edge region and a finger lower edge region; an image convolution operation module, comprising:
the upper edge convolution calculation module is used for carrying out convolution operation on the upper edge region of the finger by using the first convolution template to obtain an upper edge convolution characteristic diagram.
The lower edge convolution calculation module is used for carrying out convolution operation on the lower edge region of the finger by using the second convolution template to obtain a lower edge convolution characteristic diagram, and the first convolution template and the second convolution template are symmetrical in the vertical direction.
And the edge convolution merging module is used for merging the upper edge convolution characteristic image and the lower edge convolution characteristic image to obtain a convolution result image.
Optionally, in an embodiment of the present application, the refinement image obtaining module includes:
and the continuous pixel judging module is used for judging whether continuous pixel points in the vertical direction exist in the binarized image.
And the continuous pixel refinement module is used for refining the continuous pixel points into central points of the continuous pixel points if the continuous pixel points in the vertical direction exist in the binarized image.
Optionally, in an embodiment of the present application, the binary image obtaining module includes:
the pixel value judging module is used for judging whether each pixel value in the convolution result image is smaller than a preset threshold value or not.
And the pixel value setting module is used for setting the pixel value to zero if it is smaller than the preset threshold value, and to one otherwise.
Optionally, in an embodiment of the present application, the region of interest extraction device further includes:
and the image feature extraction module is used for extracting image features in the region of interest.
The similarity calculation judging module is used for calculating the similarity between the image features and a plurality of template features in the feature template library to obtain a plurality of similarities.
And the identity information authentication module is used for determining the identity information of the identity personnel to be authenticated according to the multiple similarities.
It should be understood that the apparatus corresponds to the above embodiment of the region of interest extraction method and is capable of executing the steps involved in that embodiment; the specific functions of the apparatus may be found in the description above, and detailed descriptions are omitted here where appropriate to avoid redundancy. The device includes at least one software functional module that can be stored in memory in the form of software or firmware, or built into the operating system (OS) of the device.
An electronic device provided in an embodiment of the present application includes: a processor and a memory storing machine-readable instructions executable by the processor, which when executed by the processor perform the method as above.
The present application also provides a storage medium having stored thereon a computer program which, when executed by a processor, performs a method as above.
The storage medium may be implemented by any type or combination of volatile or nonvolatile Memory devices, such as static random access Memory (Static Random Access Memory, SRAM), electrically erasable programmable Read-Only Memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), erasable programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), programmable Read-Only Memory (Programmable Read-Only Memory, PROM), read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk, or optical disk.
The foregoing description is merely an optional implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art may easily think about changes or substitutions within the technical scope of the embodiments of the present application, and the changes or substitutions should be covered in the scope of the embodiments of the present application.

Claims (9)

1. A region of interest extraction method, comprising:
acquiring a finger vein image of an identity person to be authenticated;
performing convolution operation on the finger vein image to obtain a convolution result image;
performing binarization operation on the convolution result image to obtain a binarized image;
carrying out refinement operation on the binarized image to obtain a refined image;
performing curve tracking on the refined image to obtain a finger edge curve;
performing image segmentation on the finger vein image according to the finger edge curve to obtain a region of interest of the finger vein image;
the curve tracking of the refined image to obtain a finger edge curve comprises the following steps: positioning the finger upper edge region of the finger vein image by using a positioning algorithm, and positioning the finger lower edge region of the finger vein image by using the positioning algorithm, so as to obtain a plurality of finger edge points located in the refined image; and performing checking and piecewise fitting on the plurality of finger edge points to obtain the finger edge curve;
the positioning of the finger upper edge region of the finger vein image by using the positioning algorithm, and then of the finger lower edge region of the finger vein image by using the positioning algorithm, so as to obtain the plurality of finger edge points located in the refined image, comprises the following steps: setting initial parameters, wherein the initial parameters comprise a preset region length M and a curve fitting point number threshold N of the finger vein image; scanning from left to right with a vertical line to determine a left starting point, and adding the left starting point to a finger edge point set; extracting N finger edge points [Q_k, Q_{k+1}, …, Q_{k+N-1}] from the end of the finger edge point set, and if the number of finger edge points in the finger edge point set is smaller than N, taking out all of the finger edge points in the finger edge point set; performing first-order polynomial fitting on the N finger edge points to obtain the tangent line l_Q of the fitted curve at the finger edge point Q_{k+N-1}; rotating the tangent line l_Q anticlockwise by θ degrees to obtain a straight line l_{Q+θ}; rotating the tangent line l_Q clockwise by θ degrees to obtain a straight line l_{Q-θ}; and searching for a foreground point R in the region enclosed by the straight line l_{Q+θ}, the straight line l_{Q-θ} and a preset straight line x = x_{Q_{k+N-1}} + M, wherein if the line connecting the foreground point R and the finger edge point Q_{k+N-1} forms the smallest included angle with the tangent line l_Q, the foreground point R is taken as the next finger edge point added to the finger edge point set;
the performing checking and piecewise fitting on the plurality of finger edge points comprises: sequentially selecting V finger edge points from the plurality of finger edge points, and performing a fitting operation on the V finger edge points so as to fill in missing points among the V finger edge points; and if the deviation between any finger edge point of the plurality of finger edge points and the adjacent points of that finger edge point is larger than a preset threshold, performing the fitting operation again according to the adjacent points of that finger edge point.
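Purely as an editorial illustration of the edge-tracking step in claim 1 (not the claimed implementation itself): the sketch below fits a first-order polynomial to the last N edge points, uses its slope as the tangent direction at Q_{k+N-1}, and then, among foreground pixels lying within ±θ of that direction and no farther right than x_{Q_{k+N-1}} + M, picks the candidate whose connecting line deviates least from the tangent. The image coordinate convention, the default values of M, N and θ, and the simplified wedge test are all assumptions:

```python
import numpy as np

def next_edge_point(edge_points, binary_img, M=20, N=10, theta_deg=30.0):
    """Pick the next finger edge point: fit a line to the last N points, take its
    slope as the tangent at the last point, and choose the foreground pixel inside
    the +/-theta wedge (bounded on the right by x = x_last + M) whose direction
    from the last point deviates least from that tangent."""
    pts = np.array(edge_points[-N:], dtype=float)      # [Q_k, ..., Q_{k+N-1}]
    slope, _ = np.polyfit(pts[:, 0], pts[:, 1], 1)     # first-order polynomial fit
    x_last, y_last = pts[-1]
    tangent = np.arctan(slope)
    theta = np.deg2rad(theta_deg)

    best, best_dev = None, None
    rows, cols = np.nonzero(binary_img)                # foreground pixels (y, x)
    for y, x in zip(rows, cols):
        if not (x_last < x <= x_last + M):             # stay within x = x_last + M
            continue
        dev = abs(np.arctan2(y - y_last, x - x_last) - tangent)
        if dev <= theta and (best_dev is None or dev < best_dev):
            best, best_dev = (float(x), float(y)), dev
    return best                                        # None if no candidate found
```

In a full tracker this function would be called repeatedly, appending each returned point to the edge point set, with the checking and piecewise fitting applied afterwards to fill gaps and re-fit around outliers.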
2. The method of claim 1, wherein said locating a plurality of finger edge points in the refined image comprises:
removing illumination interference in the refined image to obtain a refined image after interference removal;
and positioning the plurality of finger edge points from the refined image after the interference is removed.
3. The method of claim 1, wherein the finger vein image comprises: a finger upper edge region and a finger lower edge region; the step of carrying out convolution operation on the finger vein image to obtain a convolution result image comprises the following steps:
performing convolution operation on the upper edge region of the finger by using a first convolution template to obtain an upper edge convolution feature map;
performing convolution operation on the lower edge region of the finger by using a second convolution template to obtain a lower edge convolution feature map, wherein the first convolution template and the second convolution template are symmetrical in the vertical direction;
and combining the upper edge convolution feature map and the lower edge convolution feature map to obtain a convolution result image.
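For reference only, a minimal sketch of the convolution step in claim 3, under the assumptions that the image is split into upper and lower edge regions at a caller-supplied row and that a Sobel-like 3×3 kernel stands in for the (unspecified) first convolution template; the second template is obtained by flipping the first vertically, as the claim requires:

```python
import numpy as np
from scipy.ndimage import convolve

# Hypothetical edge kernel for the finger upper edge region; the template values
# are illustrative and are not taken from this application.
UPPER_TEMPLATE = np.array([[-1, -2, -1],
                           [ 0,  0,  0],
                           [ 1,  2,  1]], dtype=float)
LOWER_TEMPLATE = np.flipud(UPPER_TEMPLATE)   # symmetric in the vertical direction

def convolution_result(image: np.ndarray, split_row: int) -> np.ndarray:
    """Convolve the upper and lower finger edge regions with vertically symmetric
    templates and merge the two feature maps into one convolution result image."""
    upper = convolve(image[:split_row].astype(float), UPPER_TEMPLATE)
    lower = convolve(image[split_row:].astype(float), LOWER_TEMPLATE)
    return np.vstack([upper, lower])
```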
4. A method according to claim 3, wherein said refining the binarized image comprises:
judging whether continuous pixel points in the vertical direction exist in the binarized image or not;
if yes, thinning each run of continuous pixel points to its central point.
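A small illustrative sketch of the thinning step in claim 4, assuming a binary NumPy image in which foreground pixels are ones; each vertical run of continuous foreground pixels in a column is collapsed to its central pixel:

```python
import numpy as np

def thin_vertical_runs(binary: np.ndarray) -> np.ndarray:
    """Collapse every vertical run of continuous foreground pixels to the single
    pixel at the centre of that run."""
    thinned = np.zeros_like(binary)
    height, width = binary.shape
    for col in range(width):
        row = 0
        while row < height:
            if binary[row, col]:
                start = row
                while row < height and binary[row, col]:
                    row += 1                                 # walk to the end of the run
                thinned[(start + row - 1) // 2, col] = 1     # keep only the centre pixel
            else:
                row += 1
    return thinned
```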
5. The method of claim 1, wherein said binarizing the convolved result image comprises:
judging whether each pixel point value in the convolution result image is smaller than a preset threshold value or not;
if yes, the pixel value is set to zero, otherwise, the pixel value is set to one.
6. The method of any one of claims 1-5, further comprising, after said obtaining the region of interest of the finger vein image:
extracting image features in the region of interest;
calculating the similarity between the image features and a plurality of template features in a feature template library to obtain a plurality of similarities;
and determining the identity information of the identity personnel to be authenticated according to the plurality of similarities.
7. A region of interest extraction apparatus, comprising:
the vein image acquisition module is used for acquiring a finger vein image of the identity personnel to be authenticated;
the image convolution operation module is used for carrying out convolution operation on the finger vein image to obtain a convolution result image;
the binary image obtaining module is used for carrying out binarization operation on the convolution result image to obtain a binarized image;
the refined image obtaining module is used for carrying out refining operation on the binarized image to obtain a refined image;
the edge curve obtaining module is used for carrying out curve tracking on the refined image to obtain a finger edge curve;
the region of interest obtaining module is used for carrying out image segmentation on the finger vein image according to the finger edge curve to obtain a region of interest of the finger vein image;
the curve tracking of the refined image to obtain a finger edge curve comprises the following steps: positioning the finger upper edge region of the finger vein image by using a positioning algorithm, and positioning the finger lower edge region of the finger vein image by using the positioning algorithm, so as to obtain a plurality of finger edge points located in the refined image; and performing checking and piecewise fitting on the plurality of finger edge points to obtain the finger edge curve;
the positioning of the finger upper edge region of the finger vein image by using the positioning algorithm, and then of the finger lower edge region of the finger vein image by using the positioning algorithm, so as to obtain the plurality of finger edge points located in the refined image, comprises the following steps: setting initial parameters, wherein the initial parameters comprise a preset region length M and a curve fitting point number threshold N of the finger vein image; scanning from left to right with a vertical line to determine a left starting point, and adding the left starting point to a finger edge point set; extracting N finger edge points [Q_k, Q_{k+1}, …, Q_{k+N-1}] from the end of the finger edge point set, and if the number of finger edge points in the finger edge point set is smaller than N, taking out all of the finger edge points in the finger edge point set; performing first-order polynomial fitting on the N finger edge points to obtain the tangent line l_Q of the fitted curve at the finger edge point Q_{k+N-1}; rotating the tangent line l_Q anticlockwise by θ degrees to obtain a straight line l_{Q+θ}; rotating the tangent line l_Q clockwise by θ degrees to obtain a straight line l_{Q-θ}; and searching for a foreground point R in the region enclosed by the straight line l_{Q+θ}, the straight line l_{Q-θ} and a preset straight line x = x_{Q_{k+N-1}} + M, wherein if the line connecting the foreground point R and the finger edge point Q_{k+N-1} forms the smallest included angle with the tangent line l_Q, the foreground point R is taken as the next finger edge point added to the finger edge point set;
the performing checking and piecewise fitting on the plurality of finger edge points comprises: sequentially selecting V finger edge points from the plurality of finger edge points, and performing a fitting operation on the V finger edge points so as to fill in missing points among the V finger edge points; and if the deviation between any finger edge point of the plurality of finger edge points and the adjacent points of that finger edge point is larger than a preset threshold, performing the fitting operation again according to the adjacent points of that finger edge point.
8. An electronic device, comprising: a processor and a memory storing machine-readable instructions executable by the processor, wherein the machine-readable instructions, when executed by the processor, perform the method of any one of claims 1 to 6.
9. A storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any of claims 1 to 6.
CN202110107665.2A 2021-01-26 2021-01-26 Region of interest extraction method and device, electronic equipment and storage medium Active CN112784837B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110107665.2A CN112784837B (en) 2021-01-26 2021-01-26 Region of interest extraction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110107665.2A CN112784837B (en) 2021-01-26 2021-01-26 Region of interest extraction method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112784837A CN112784837A (en) 2021-05-11
CN112784837B true CN112784837B (en) 2024-01-30

Family

ID=75757968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110107665.2A Active CN112784837B (en) 2021-01-26 2021-01-26 Region of interest extraction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112784837B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105474234B (en) * 2015-11-24 2019-03-29 厦门中控智慧信息技术有限公司 A kind of vena metacarpea knows method for distinguishing and vena metacarpea identification device

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0545135A (en) * 1990-12-26 1993-02-23 Ind Technol Res Inst Method and device for visually measuring precise contour
CN101261685A (en) * 2008-01-31 2008-09-10 浙江工业大学 Automatic input device for cloth sample image based on image vector technology
CN102945506A (en) * 2012-08-28 2013-02-27 同济大学 Method for processing boundary contour image information for micro sitting of wind farm
CN104700420A (en) * 2015-03-26 2015-06-10 爱威科技股份有限公司 Ellipse detection method and system based on Hough conversion and ovum identification method
CN105518716A (en) * 2015-10-10 2016-04-20 厦门中控生物识别信息技术有限公司 Finger vein recognition method and apparatus
CN105373781A (en) * 2015-11-16 2016-03-02 成都四象联创科技有限公司 Binary image processing method for identity authentication
CN107563293A (en) * 2017-08-03 2018-01-09 广州智慧城市发展研究院 A kind of new finger vena preprocess method and system
CN108682028A (en) * 2018-05-16 2018-10-19 陈年康 Laser point cloud based on radiation correcting and optical image automatic matching method
CN109272521A (en) * 2018-10-11 2019-01-25 北京理工大学 A kind of characteristics of image fast partition method based on curvature analysis
CN109815869A (en) * 2019-01-16 2019-05-28 浙江理工大学 A kind of finger vein identification method based on the full convolutional network of FCN
CN110188778A (en) * 2019-05-31 2019-08-30 中国人民解放军61540部队 Residential block element profile rule method based on Extraction of Image result
CN110705342A (en) * 2019-08-20 2020-01-17 上海阅面网络科技有限公司 Lane line segmentation detection method and device
CN110765856A (en) * 2019-09-12 2020-02-07 南京邮电大学 Convolution-based low-quality finger vein image edge detection algorithm
CN111914755A (en) * 2020-08-03 2020-11-10 河南大学 Eight-direction gradient-solving fingerprint identification model

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Multimodal Biometric Recognition Based on Convolutional Neural Network by the Fusion of Finger-Vein and Finger Shape Using Near-Infrared (NIR) Camera Sensor; Wan Kim et al.; Sensors; Vol. 18; 1-34 *
Bessel Fitting Image Restoration Algorithm Based on Wavelet Bicubic Ratio Interpolation; Shi Yongsheng et al.; Science Technology and Engineering; Vol. 20, No. 23; 9472-9477 *
Research on Depth Information Recovery Technology Based on Stereo Vision; Tang Zhijian; China Excellent Master's and Doctoral Dissertations Full-text Database (Master), Information Science and Technology; No. 08; pp. 26-34 *
Finger Vein Recognition Algorithm and Its Cryptographic Application; Yu Yun; China Excellent Master's Theses Full-text Database, Information Science and Technology; No. 02; pp. 7-10 and 31-32 *

Also Published As

Publication number Publication date
CN112784837A (en) 2021-05-11

Similar Documents

Publication Publication Date Title
US10552705B2 (en) Character segmentation method, apparatus and electronic device
US9171204B2 (en) Method of perspective correction for devanagari text
WO2019169532A1 (en) License plate recognition method and cloud system
US9076056B2 (en) Text detection in natural images
CN110852349A (en) Image processing method, detection method, related equipment and storage medium
CN110647882A (en) Image correction method, device, equipment and storage medium
CN112926531A (en) Feature information extraction method, model training method and device and electronic equipment
CN113011426A (en) Method and device for identifying certificate
CN110516731B (en) Visual odometer feature point detection method and system based on deep learning
CN108960247B (en) Image significance detection method and device and electronic equipment
CN111898408B (en) Quick face recognition method and device
CN112784837B (en) Region of interest extraction method and device, electronic equipment and storage medium
US8891822B2 (en) System and method for script and orientation detection of images using artificial neural networks
CN113378847B (en) Character segmentation method, system, computer device and storage medium
CN112906495B (en) Target detection method and device, electronic equipment and storage medium
CN114529570A (en) Image segmentation method, image identification method, user certificate subsidizing method and system
CN111767751B (en) Two-dimensional code image recognition method and device
CN112711748B (en) Finger vein identity authentication method and device, electronic equipment and storage medium
CN113591066A (en) Equipment identity identification method and device
CN112184776A (en) Target tracking method, device and storage medium
CN113298079A (en) Image processing method and device, electronic equipment and storage medium
US8903175B2 (en) System and method for script and orientation detection of images
KR101437286B1 (en) Method and apparatus for identifying digital contents
JP6138038B2 (en) Form identification device and form identification method
CN117197422B (en) Identification code positioning method, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230930

Address after: 528400, Xueyuan Road, 1, Shiqi District, Guangdong, Zhongshan

Applicant after: University OF ELECTRONIC SCIENCE AND TECHNOLOGY OF CHINA, ZHONGSHAN INSTITUTE

Applicant after: Productivity Promotion Center of Xiaolan Town, Zhongshan City

Address before: 528400, Xueyuan Road, 1, Shiqi District, Guangdong, Zhongshan

Applicant before: University OF ELECTRONIC SCIENCE AND TECHNOLOGY OF CHINA, ZHONGSHAN INSTITUTE

GR01 Patent grant