CN111950406A - Finger vein identification method, device and storage medium - Google Patents
Finger vein identification method, device and storage medium
- Publication number: CN111950406A
- Application number: CN202010749346.7A
- Authority
- CN
- China
- Prior art keywords
- finger vein
- graph
- image
- nodes
- node set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/14—Vascular patterns
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N3/045—Combinations of networks
- G06N3/048—Activation functions
- G06N3/08—Learning methods
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/267—Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
- G06V10/431—Frequency domain transformation; Autocorrelation
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Abstract
The invention discloses a finger vein identification method, device, and storage medium. The finger vein identification method comprises the following steps: preprocessing a collected original finger vein image to obtain a preprocessed finger vein image; obtaining a node set from the preprocessed finger vein image according to the SLIC superpixel segmentation algorithm and constructing a finger vein weighted graph from the node set; and performing finger vein recognition on the finger vein weighted graph based on an improved graph convolutional neural network to obtain a recognition result. By jointly considering the stability of the graph structure and the randomness of the finger vein image when constructing the graph model, the method improves finger vein identification accuracy.
Description
Technical Field
The invention relates to the technical field of biometric identification, and in particular to a finger vein identification method, device, and storage medium.
Background
Finger vein recognition is a relatively new biometric technology widely applied to identity authentication. Its working principle is to extract biometric features of the finger veins from a finger vein image and compare this feature information with pre-registered finger vein features, thereby completing identity authentication. So that the finger vein image better expresses the feature information of the veins, recently proposed methods mainly represent the finger vein image as a graph model: either minutiae points are selected directly from the image as graph nodes, or a node set is obtained by uniformly partitioning the image into blocks. In practice, finger vein minutiae are unstable, so using them directly as graph nodes produces large structural differences between finger vein graphs, and image processing such as enhancement may generate false minutiae; both effects degrade recognition accuracy. Uniform block partitioning, in turn, ignores the randomness of biological tissue and likewise degrades accuracy. Existing graph-model-based finger vein recognition methods therefore cannot jointly account for the stability of the graph structure and the randomness of the finger vein image, and their recognition accuracy is low.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a finger vein recognition method, device, and storage medium that construct a graph model of the finger vein image while jointly considering the stability of the graph structure and the randomness of the finger vein image, improving finger vein recognition accuracy.
In order to solve the above technical problem, in a first aspect, an embodiment of the present invention provides a finger vein recognition method, including:
preprocessing the collected original finger vein image to obtain a preprocessed finger vein image;
acquiring a node set from the preprocessed finger vein image according to a SLIC superpixel segmentation algorithm, and constructing a finger vein weighted graph by using the node set;
and performing finger vein recognition on the finger vein weighted graph based on the improved graph convolution neural network to obtain a recognition result.
Further, the preprocessing of the acquired original finger vein image to obtain the preprocessed finger vein image specifically includes:
extracting a region of interest from the original finger vein image to obtain a region-of-interest image;
and performing image smoothing on the region-of-interest image to obtain the preprocessed finger vein image.
Further, the preprocessing of the acquired original finger vein image to obtain the preprocessed finger vein image specifically includes:
extracting a region of interest from the original finger vein image to obtain a region-of-interest image;
and performing image smoothing and image enhancement on the region-of-interest image to obtain the preprocessed finger vein image.
Further, the obtaining a node set from the preprocessed finger vein image according to the SLIC superpixel segmentation algorithm, and constructing a finger vein weighted graph by using the node set specifically include:
segmenting the preprocessed finger vein image according to the SLIC superpixel segmentation algorithm, and constructing the node set by taking the obtained superpixel blocks as graph nodes; wherein the features of the nodes in the node set are the gray-level and spatial features of the corresponding superpixel blocks;
and connecting the nodes in the node set according to the graph structure of the undirected complete graph, constructing an edge set, and taking the spatial intimacy between the corresponding nodes as the weight of the edge to obtain the finger vein weighted graph.
Further, the obtaining a node set from the preprocessed finger vein image according to the SLIC superpixel segmentation algorithm, and constructing a finger vein weighted graph by using the node set specifically include:
segmenting the preprocessed finger vein image according to the SLIC superpixel segmentation algorithm, and constructing the node set by taking the obtained superpixel blocks as graph nodes; wherein the features of the nodes in the node set are the directional energy distributions of the corresponding superpixel blocks after the blocks are rectangularized;
and connecting the nodes in the node set according to the graph structure of the undirected complete graph, constructing an edge set, and taking the feature similarity between the corresponding nodes as the weight of the edge to obtain the finger vein weighted graph.
Further, the finger vein recognition is performed on the finger vein weighted graph based on the improved graph convolution neural network to obtain a recognition result, specifically:
and inputting the finger vein weighted graph into the improved graph convolution neural network, and comparing the similarity of the characteristics of all nodes in different finger vein weighted graphs through the improved graph convolution neural network to obtain the identification result.
Further, the improved graph convolutional neural network includes a graph convolution layer, a pooling layer, and a readout layer.
Further, the convolution kernel of the graph convolution layer is defined by Chebyshev polynomials.
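The Chebyshev-kernel convolution named above can be sketched as a generic ChebNet-style spectral filter. The following numpy sketch is illustrative only (the function name, scalar coefficients, and toy graph are assumptions, not taken from the patent; a real layer learns a weight matrix per polynomial order):

```python
import numpy as np

def chebyshev_filter(A, X, theta):
    """K-order Chebyshev graph filtering (ChebNet-style sketch).

    Computes y = sum_k theta_k * T_k(L_tilde) X, where
    T_0 = I, T_1 = L_tilde, T_k = 2 L_tilde T_{k-1} - T_{k-2},
    and L_tilde = 2 L / lambda_max - I is the rescaled normalized
    Laplacian. `theta` holds K >= 2 filter coefficients (scalars
    here for clarity; a trainable layer would use matrices)."""
    d = A.sum(1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    # Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    lam_max = np.linalg.eigvalsh(L).max()
    L_t = 2 * L / lam_max - np.eye(len(A))
    t_prev, t_curr = X, L_t @ X          # T_0 X and T_1 X
    out = theta[0] * t_prev + theta[1] * t_curr
    for k in range(2, len(theta)):
        # Chebyshev recurrence aggregates the k-hop neighborhood
        t_prev, t_curr = t_curr, 2 * L_t @ t_curr - t_prev
        out = out + theta[k] * t_curr
    return out

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # toy path graph
X = np.eye(3)                                           # toy node features
Y = chebyshev_filter(A, X, theta=[0.5, 0.3, 0.2])
```

With K = 3 coefficients, each output row mixes features from up to two-hop neighborhoods, matching the self/first-order/second-order aggregation described for fig. 9.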
In a second aspect, an embodiment of the present invention provides a finger vein recognition apparatus, including:
the finger vein image preprocessing module is used for preprocessing the acquired original finger vein image to obtain a preprocessed finger vein image;
the finger vein weighted graph construction module is used for acquiring a node set from the preprocessed finger vein image according to a SLIC super-pixel segmentation algorithm and constructing a finger vein weighted graph by using the node set;
and the finger vein recognition module is used for performing finger vein recognition on the finger vein weighted graph based on the improved graph convolutional neural network to obtain a recognition result.
In a third aspect, an embodiment of the present invention provides a computer-readable storage medium, which includes a stored computer program, where the computer program, when executed, controls a device in which the computer-readable storage medium is located to execute the finger vein recognition method as described above.
The embodiment of the invention has the following beneficial effects:
According to the SLIC superpixel segmentation algorithm, a node set is obtained from the preprocessed original finger vein image, i.e. the preprocessed finger vein image, and a finger vein weighted graph is constructed from the node set; finger vein recognition is then performed on the weighted graph with the improved graph convolutional neural network to obtain the recognition result. Compared with the prior art, the embodiment segments the preprocessed finger vein image with the SLIC superpixel segmentation algorithm and uses the resulting superpixel blocks as graph nodes to construct the node set and, from it, the finger vein weighted graph. This exploits the fact that superpixel blocks retain the effective information of the image while containing more perceptual information, so the graph model jointly reflects the stability of the graph structure and the randomness of the finger vein image, improving recognition accuracy. In addition, performing recognition with the improved graph convolutional neural network allows deep features of the finger vein weighted graph to be extracted, further improving recognition accuracy.
Drawings
Fig. 1 is a schematic flow chart of a finger vein recognition method according to a first embodiment of the present invention;
FIG. 2 is an image of a region of interest in a preferred embodiment of the first embodiment of the present invention;
FIG. 3 is a Gaussian filtered image of a preferred embodiment of the first embodiment of the present invention;
FIG. 4 is a flowchart illustrating a SLIC superpixel segmentation algorithm according to a first embodiment of the present invention;
FIG. 5 is a flow chart of constructing a finger vein weighting chart according to a preferred embodiment of the first embodiment of the present invention;
FIG. 6 is a diagram illustrating the effect of enhancing an image with a Gabor filter based on Weber's law in another preferred embodiment of the first embodiment of the present invention;
FIG. 7 is a schematic diagram of the directional power distribution of a rectangular block of super pixels according to another preferred embodiment of the first embodiment of the present invention;
FIG. 8 is a flow chart of constructing a finger vein weighting chart according to another preferred embodiment of the first embodiment of the present invention;
FIG. 9 is a schematic diagram illustrating a process of sequentially aggregating self, first-order, and second-order neighborhood features by performing graph convolution operations using a Chebyshev filter according to the first embodiment of the present invention;
FIG. 10 is a diagram of a modified pooling layer configuration in a first embodiment of the present invention;
FIG. 11 is a schematic structural diagram of the improved graph convolutional neural network in the first embodiment of the present invention;
fig. 12 is a schematic structural diagram of a finger vein recognition apparatus according to a third embodiment of the present invention.
Detailed Description
The technical solutions in the present invention will be described clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, the step numbers in the text are only for convenience of explanation of the specific embodiments, and do not serve to limit the execution sequence of the steps.
The first embodiment:
as shown in fig. 1, the first embodiment provides a finger vein recognition method including steps S1 to S3:
and S1, preprocessing the collected original finger vein image to obtain a preprocessed finger vein image.
And S2, acquiring a node set from the preprocessed finger vein image according to the SLIC superpixel segmentation algorithm, and constructing a finger vein weighted graph by using the node set.
And S3, performing finger vein recognition on the finger vein weighted graph based on the improved graph convolution neural network to obtain a recognition result.
As an example, in step S1, considering noise in the acquired original finger vein image, such as regions of excessive brightness variation or blurred texture detail, the original image is preprocessed to obtain the preprocessed finger vein image, so that a node set can subsequently be obtained from an image of better quality according to the SLIC superpixel segmentation algorithm.
In step S2, when the preprocessed finger vein image is obtained, the preprocessed finger vein image is segmented according to the SLIC superpixel segmentation algorithm, an obtained superpixel block is used as a graph node to construct a node set, and a finger vein weighted graph is constructed by using the node set.
The SLIC (simple linear iterative clustering) superpixel segmentation algorithm converts an image into 5-dimensional vectors in the CIELAB color space plus XY coordinates, constructs a distance metric over these vectors, and locally clusters the image pixels to generate superpixel blocks, thereby realizing image segmentation. It produces compact, approximately uniform superpixel blocks and rates highly in overall evaluations of running speed, object contour preservation, and superpixel shape. A superpixel block is a region of pixels that are adjacent in position and similar in color, gray level, texture, and other characteristics, so it preserves the effective information of the original image. A finger vein weighted graph built on superpixel blocks is therefore structurally stable, computationally efficient, rich in perceptual information, and effective at expressing the randomness of the finger vein image.
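The local clustering idea can be sketched in a few dozen lines of numpy. The miniature below is an illustrative, grayscale-only approximation (finger vein images are grayscale): real SLIC works on (l, a, b, x, y) CIELAB vectors, perturbs seeds to the lowest-gradient pixel in a 3 × 3 neighborhood, and runs a connectivity post-pass, and in practice a library implementation such as skimage.segmentation.slic would be used. The function name and toy image are assumptions.

```python
import numpy as np

def mini_slic(img, k=16, m=10.0, iters=5):
    """Minimal SLIC-style local k-means on a grayscale image.

    Seeds k cluster centers on a regular grid with interval
    S = sqrt(N / k), then repeatedly assigns each pixel to the
    nearest center searched only within a 2S x 2S window, using a
    normalized intensity + spatial distance."""
    h, w = img.shape
    s = int(np.sqrt(h * w / k))                 # grid interval S
    ys = np.arange(s // 2, h, s)
    xs = np.arange(s // 2, w, s)
    centers = np.array([[img[y, x], x, y] for y in ys for x in xs], float)
    yy, xx = np.mgrid[0:h, 0:w]
    labels = np.full((h, w), -1)
    for _ in range(iters):
        dist = np.full((h, w), np.inf)
        for ci, (c, cx, cy) in enumerate(centers):
            # Restrict the search to a 2S x 2S window around the center.
            y0, y1 = int(max(cy - s, 0)), int(min(cy + s, h))
            x0, x1 = int(max(cx - s, 0)), int(min(cx + s, w))
            dc = np.abs(img[y0:y1, x0:x1] - c)              # intensity distance
            ds = np.hypot(xx[y0:y1, x0:x1] - cx,
                          yy[y0:y1, x0:x1] - cy)            # spatial distance
            d = np.hypot(dc / m, ds / s)                    # normalized D'
            win = dist[y0:y1, x0:x1]
            better = d < win
            win[better] = d[better]
            labels[y0:y1, x0:x1][better] = ci
        # Move each center to the mean of its assigned pixels.
        for ci in range(len(centers)):
            mask = labels == ci
            if mask.any():
                centers[ci] = [img[mask].mean(), xx[mask].mean(), yy[mask].mean()]
    return labels

img = np.tile(np.linspace(0, 255, 40), (40, 1))   # toy 40x40 gradient image
labels = mini_slic(img, k=16)
```

The restricted 2S × 2S search window is what distinguishes this from plain k-means and keeps the cost linear in the number of pixels.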
In step S3, when the finger vein weighted graph is obtained, the finger vein weighted graph is input into the improved graph-convolution neural network, the depth features in the finger vein weighted graph are extracted through the improved graph-convolution neural network, and feature comparison is performed on different finger vein weighted graphs to obtain a recognition result, so as to implement finger vein recognition.
In this embodiment, a node set is obtained from the preprocessed original finger vein image, i.e. the preprocessed finger vein image, according to the SLIC superpixel segmentation algorithm, and a finger vein weighted graph is constructed from the node set; finger vein recognition is then performed on the weighted graph with the improved graph convolutional neural network to obtain the recognition result. The preprocessed image is segmented by the SLIC algorithm and the resulting superpixel blocks serve as the graph nodes of the node set, from which the weighted graph is built. Because superpixel blocks retain effective image information and contain more perceptual information, the graph model jointly considers the stability of the graph structure and the randomness of the finger vein image, improving recognition accuracy; extracting deep features of the weighted graph with the improved graph convolutional neural network improves it further.
In a preferred embodiment, the preprocessing of the acquired original finger vein image specifically comprises: extracting a region of interest from the original finger vein image to obtain a region-of-interest image; and performing image smoothing on the region-of-interest image to obtain the preprocessed finger vein image.
In this embodiment, a region of interest (ROI) is extracted from each acquired original finger vein image to obtain an ROI image. All ROI images are normalized to the same size, for example 91 × 200, and assigned class labels, with ROI images of the same finger of the same person receiving the same label. In addition, considering regions of excessive brightness variation in the original image, image smoothing is applied to the ROI image, for example with a Gaussian filter of standard deviation 0.8, yielding the preprocessed finger vein image. The region of interest image is shown in fig. 2 and the Gaussian filtered image in fig. 3.
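This smoothing step can be sketched with a numpy-only separable Gaussian filter. The sketch below is illustrative (the stand-in ROI is generated at 91 × 200 directly rather than extracted from a real image, and in practice scipy.ndimage.gaussian_filter or OpenCV would likely be used; function names are assumptions):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Sampled 1-D Gaussian, normalized to sum to 1."""
    if radius is None:
        radius = max(1, int(np.ceil(3 * sigma)))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img, sigma=0.8):
    """Separable Gaussian smoothing with edge-replicate padding."""
    k = gaussian_kernel1d(sigma)
    r = len(k) // 2
    # A 2-D Gaussian is separable: filter rows, then columns.
    pad = np.pad(img, ((0, 0), (r, r)), mode='edge')
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 1, pad)
    pad = np.pad(rows, ((r, r), (0, 0)), mode='edge')
    return np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 0, pad)

roi = np.random.default_rng(0).uniform(0, 255, (91, 200))  # stand-in 91x200 ROI
smoothed = smooth(roi, sigma=0.8)
```

With σ = 0.8 the kernel radius is only 3 pixels, so the filter suppresses local brightness noise while leaving vein-scale structure largely intact.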
In a preferred embodiment of this embodiment, the obtaining a node set from the preprocessed finger vein image according to the SLIC superpixel segmentation algorithm, and constructing the finger vein weighted graph by using the node set specifically includes: segmenting the preprocessed finger vein image according to an SLIC superpixel segmentation algorithm, and constructing a node set by taking the obtained superpixel blocks as graph nodes; wherein, the characteristics of the nodes in the node set are the gray characteristics and the spatial characteristics of the corresponding superpixel blocks; and connecting the nodes in the node set according to the graph structure of the undirected complete graph, constructing an edge set, and taking the spatial intimacy between the corresponding nodes as the weight of the edge to obtain the finger vein weighted graph.
A weighted graph with N nodes may be represented as G = (V, E, A), where V = {v1, v2, …, vN} is the node set and E = {eij} is the edge set, with eij = (vi, vj) (i ≠ j). A ∈ R^(N×N) is the weighted adjacency matrix of the graph, and wij ∈ A denotes the weight on edge eij. The signal on the graph is defined as X; the node feature matrix of the graph is denoted X ∈ R^(N×C), where Xi ∈ R^C is the feature of node vi and C is the dimension of the node features. Constructing the finger vein weighted graph comprises constructing the node set, calculating the node features, constructing the edge set, and calculating the weights.
Fig. 4 is a flow chart of the SLIC superpixel segmentation algorithm. The SLIC algorithm generates superpixels by improved K-means clustering. Given the preprocessed finger vein image and a segmentation number K, it first converts the image to the CIELAB color space, so that the luminance l, color values (a, b), and spatial coordinates (x, y) of each pixel form a five-dimensional vector (l, a, b, x, y). K initial cluster centers Ci = [li, ai, bi, xi, yi]^T are then sampled on a grid with interval S = sqrt(N/K) pixels, where N is the total number of pixels. To avoid a cluster center falling on a contour boundary with a large gradient, each center is moved to the lowest-gradient position in its 3 × 3 neighborhood, and a label is assigned to each pixel point in the neighborhood searched by the cluster center. Unlike standard K-means clustering, which searches the whole image, the SLIC algorithm limits the search range to 2S × 2S, reducing the number of distance computations and thereby accelerating convergence. For each searched pixel, the color distance dc and spatial distance ds to the cluster center are calculated and normalized to obtain the final distance measure D', as in formulas (1), (2), and (3):

dc = sqrt((lj - li)^2 + (aj - ai)^2 + (bj - bi)^2) (1)

ds = sqrt((xj - xi)^2 + (yj - yi)^2) (2)

D' = sqrt((dc/Nc)^2 + (ds/Ns)^2) (3)

In formulas (1) to (3), Ns = S is the maximum spatial distance within a class, and Nc is the maximum color distance, usually taken as a fixed constant m in the interval [1, 40] that weighs the relative importance of color similarity against spatial proximity. The above steps iterate until the error converges; finally, a connected-components algorithm reassigns undersized, discontinuous superpixels to adjacent superpixel regions. After the finger vein superpixel map is obtained by the SLIC superpixel segmentation algorithm, each superpixel block forms a graph node, giving the node set. The features of the nodes in the node set are the gray-level and spatial features of the corresponding superpixel blocks. Because the acquisition light source is constant while the epidermis and subcutaneous tissue thickness of different fingers differs, the gray-level characteristics of finger vein imaging differ between individuals; within the same finger vein image, the gray levels of vein and non-vein regions also differ, and each superpixel block additionally has a spatial position. Therefore, the mean pixel intensity M = (1/Ni) Σ pn over each superpixel block and its centroid coordinate C = (xi, yi)/Smax are taken as the gray and spatial features of the node, where Ni is the total number of pixels in the i-th superpixel, pn is each pixel value, and Smax is the maximum value of the image size. Because the two features have different scales, z-score (zero-mean) normalization is applied, giving the final node feature matrix of the finger vein weighted graph X = [Msp, Csp] ∈ R^(N×3).
The edges of the graph describe the connection relationships between the nodes, and the edge weights describe the strength of those relationships. Typical graphs constructed from edge sets include region adjacency graphs, K-nearest-neighbor graphs, triangulation graphs and the like; these are all locally connected graph structures, the constructed graphs are highly sparse, and graph convolution extracts little perceptual information from them. Therefore the nodes in the node set are connected according to the structure of an undirected complete graph, with exactly one edge between each pair of distinct nodes, so the edge weights satisfy w_ij = w_ji; the edge set is thus constructed, with the spatial intimacy between the corresponding nodes taken as the edge weight. The spatial intimacy between corresponding nodes is obtained by calculating the spatial distance between the superpixel centroid coordinates: the smaller its value, the larger the distance between the two superpixel centroids, the smaller the spatial intimacy between the corresponding nodes, and the weaker the connection relationship. The weight calculation formula is shown in formula (4):
In formula (4), (x_i, y_i) and (x_j, y_j) are the centroid coordinates of the two superpixel blocks, and σ is a scale parameter taking a value in [0, 1]; in this embodiment, taking σ = 0.1π obtains a better weight distribution, thereby obtaining the finger vein weighted graph. A schematic flow chart for constructing the finger vein weighted graph is shown in fig. 5.
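The graph construction just described can be sketched as follows. Note the hedge: formula (4) itself is not reproduced in the text, so a Gaussian kernel on centroid distance, w_ij = exp(−d²/(2σ²)) with σ = 0.1π as stated in the embodiment, is assumed here, and all names are illustrative:

```python
import numpy as np

def node_features(img, labels, s_max=None):
    """Node features per superpixel: mean intensity M_i = (1/N_i) * sum(p_n)
    and centroid (x_i, y_i) / S_max, then z-score standardized."""
    if s_max is None:
        s_max = max(img.shape)                   # S_max: largest image dimension
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    feats = []
    for ci in np.unique(labels):
        mask = labels == ci
        feats.append([img[mask].mean(),          # gray feature
                      xx[mask].mean() / s_max,   # spatial features
                      yy[mask].mean() / s_max])
    X = np.asarray(feats)
    return (X - X.mean(0)) / (X.std(0) + 1e-12)  # z-score: shared feature scale

def spatial_weights(centroids, sigma=0.1 * np.pi):
    """Complete-graph edge weights from centroid distance (assumed Gaussian
    kernel standing in for the un-reproduced formula (4))."""
    d2 = ((centroids[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                     # undirected graph, no self-loops
    return W

img = np.random.default_rng(0).random((32, 32))
labels = np.repeat(np.arange(4), 8)[:, None] * np.ones((1, 32), int)  # 4 bands
X = node_features(img, labels)
C = np.array([[0.1, 0.1], [0.1, 0.2], [0.9, 0.9]])  # toy normalized centroids
W = spatial_weights(C)
```

Nearby centroids (rows 0 and 1 of `C`) receive a larger weight than distant ones, matching the "spatial intimacy" interpretation in the text.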
In another preferred embodiment, the preprocessing is performed on the acquired original finger vein image to obtain a preprocessed finger vein image, specifically: extracting an interested region from the original finger vein image to obtain an interested region image; and carrying out image smoothing processing and image enhancement processing on the image of the region of interest to obtain a preprocessed finger vein image.
In this embodiment, a region of interest is extracted from each acquired original finger vein image to obtain region of interest (ROI) images. The sizes of all ROI images are normalized, for example to 91 × 200, and class labels are assigned so that ROI images from the same finger of the same person share the same class label. In addition, considering the areas with overly large brightness changes and the blurred texture detail areas in the original finger vein images, image smoothing and image enhancement are performed on the ROI images: for example, a Gaussian filter with standard deviation 0.8 smooths the images, and a Gabor filter based on Weber's law enhances them, giving the preprocessed finger vein images. The effect of enhancing an image with the Weber-law-based Gabor filter is shown in fig. 6.
The original finger vein image has blurred texture detail areas mainly because it is affected by illumination during acquisition. A Gabor filter based on Weber's law is therefore adopted to enhance the image: Gabor wavelets are combined with the Weber principle, the direction represented by the maximum difference of the Weber local descriptor is taken as the main direction of the modal image, and an image enhancement strategy is established through the Gabor filter. This weakens the influence of illumination variation on image quality, makes the finger vein texture details clearer, and enhances the expressive effect of the image.
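The Gaussian smoothing step (σ = 0.8) can be sketched with a separable 1-D kernel; the Weber-law Gabor enhancement is omitted here, and the function name is illustrative:

```python
import numpy as np

def gaussian_smooth(img, sigma=0.8):
    """Separable Gaussian smoothing with sigma = 0.8, as in the preprocessing
    step (the Weber-law Gabor enhancement stage is not shown)."""
    r = int(3 * sigma) + 1                       # kernel radius ~ 3 sigma
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()                                 # normalize to unit gain
    pad = np.pad(img, r, mode='edge')
    # convolve rows, then columns, with the same 1-D kernel
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='valid'), 0, rows)

roi = np.random.default_rng(1).random((91, 200))  # ROI normalized to 91 x 200
smooth = gaussian_smooth(roi)
```

Smoothing preserves the ROI size and reduces pixel-level variance, which stabilizes the subsequent superpixel segmentation.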
In a preferred embodiment of this embodiment, the obtaining a node set from the preprocessed finger vein image according to the SLIC superpixel segmentation algorithm, and constructing the finger vein weighted graph by using the node set, specifically includes: segmenting the preprocessed finger vein image according to the SLIC superpixel segmentation algorithm, and constructing a node set with the obtained superpixel blocks as graph nodes, wherein the features of the nodes in the node set are the oriented energy distributions of the corresponding rectangularized superpixel blocks; and connecting the nodes in the node set according to the graph structure of the undirected complete graph, constructing an edge set, and taking the feature similarity between the corresponding nodes as the edge weight, to obtain the finger vein weighted graph.
Since image segmentation on the smoothed and enhanced image exhibits a different randomness from segmentation on the merely smoothed image, the finger vein weighted graph constructed by the SLIC superpixel segmentation algorithm also differs. First, each superpixel block is reconstructed into a rectangle according to its maximum length and width values; adjacent reconstructed rectangular superpixel blocks overlap, which increases the correlation between blocks. The reconstructed rectangular superpixel blocks are then mapped onto the refined preprocessed finger vein image, and a steerable filter is used to extract the oriented energy distribution (OED) of each block as the node features. As shown in fig. 7, the superpixel regions represented by the nodes differ in shape; ideally, regardless of region size, the steerable filter can generate an energy response over 0° to 360° and effectively describe the directional randomness of the block. The general formulas of the steerable filter are shown in formulas (5) to (7):

h_θ(x, y) = Σ_{j=1..N} k_j(θ) f_{φj}(x, y)   (5)
E(θ) = Σ_{x=1..X} Σ_{y=1..Y} (I(x, y) * h_θ(x, y))²   (6)
χ={E(1),E(2),…,E(θ),…,E(360)|I} (7)
In formulas (5) to (7), f(x, y) is the base filter set composed of trigonometric functions, φ_j is the filter direction, N is the number of base filters f(x, y), k(θ) is the interpolation function, and θ is the direction of the steerable filter. Formula (6) is the calculation of the directional filter energy, i.e. the energy of the image I filtered by h_θ in a given direction θ; X and Y are the sizes of the filter and of the image I, where I in this embodiment is a reconstructed superpixel rectangular block. Formula (7) assembles the directional energies for the different θ into a vector, giving the oriented energy distribution feature of the finger vein superpixel rectangular block; the node features of the finger vein weighted graph are finally X = [χ_sp] ∈ R^(N×360).
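A minimal illustration of the oriented energy of formula (6), steering a first-derivative basis pair (an assumption for illustration; the patent does not specify its basis filters, and only 8 directions are sampled instead of 360):

```python
import numpy as np

def oriented_energy(patch, n_dirs=8):
    """Oriented energy E(theta) of a patch under a steered first-derivative
    basis: h_theta = cos(t)*Gx + sin(t)*Gy (formula (5)-style steering),
    E(theta) = sum of squared responses (formula (6)-style)."""
    gy, gx = np.gradient(patch)                  # responses of the y/x basis pair
    thetas = np.deg2rad(np.arange(n_dirs) * 360.0 / n_dirs)
    return np.array([((np.cos(t) * gx + np.sin(t) * gy) ** 2).sum()
                     for t in thetas])

patch = np.tile(np.arange(16.0), (16, 1))        # pure horizontal gradient
chi = oriented_energy(patch)                     # energy peaks at 0 and 180 deg
```

For this patch the energy concentrates along the gradient direction, which is exactly the directional randomness the OED feature is meant to capture.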
The nodes in the node set are connected according to the graph structure of an undirected complete graph, with exactly one edge between each pair of distinct nodes, so the edge weights satisfy w_ij = w_ji; the edge set is thus constructed, with the feature similarity between the corresponding nodes taken as the edge weight. Adjacent superpixel blocks have high correlation and high feature similarity, so the edge weight between the corresponding nodes is large; conversely, superpixel blocks that are far apart have lower correlation, so the edge weight between the corresponding nodes is smaller. The weight calculation formula is shown in formula (8):
In formula (8), f_i and f_j are the direction feature vectors of nodes i and j, respectively, and L is the length of f_i and f_j. A schematic flow chart for constructing the finger vein weighted graph is shown in fig. 8.
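A hedged sketch of the feature-similarity edge weights: since formula (8) is not reproduced in the text, cosine similarity of the 360-dimensional OED vectors is assumed here as the similarity measure, and the names are illustrative:

```python
import numpy as np

def feature_weights(F):
    """Edge weights from direction-feature similarity (assumed cosine
    similarity of the OED vectors, standing in for formula (8))."""
    Fn = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)
    W = Fn @ Fn.T                                # cosine similarity; in [0, 1]
    np.fill_diagonal(W, 0.0)                     # for non-negative OED energies
    return W

F = np.abs(np.random.default_rng(2).random((5, 360)))  # 5 nodes, 360-dim OED
W = feature_weights(F)
```

As with the spatial weights, the matrix is symmetric (w_ij = w_ji), matching the undirected complete-graph structure.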
In a preferred embodiment, the finger vein weighted graph is subjected to finger vein recognition based on the improved graph convolution neural network, and a recognition result is obtained, specifically: and inputting the finger vein weighted graph into the improved graph convolution neural network, and comparing the similarity of the characteristics of all nodes in different finger vein weighted graphs through the improved graph convolution neural network to obtain an identification result.
The improved graph convolution neural network comprises a graph convolution layer, a pooling layer and a readout layer.
In a preferred embodiment of this embodiment, the graph convolution layer is a graph convolution layer whose convolution kernel is defined by Chebyshev polynomials.
For the graph convolution layer: since the constructed finger vein weighted graph is non-sparse, spectral graph convolution can expand the receptive field of the convolution kernel and extract higher-order information from the graph, compared with the low-order neighborhood aggregation and layer stacking of spatial graph convolution. The spectral graph convolution proceeds as follows: the graph signal is first transformed into the Fourier domain by the graph Fourier transform, whose definition depends on the eigenvectors and eigenvalues of the decomposed graph Laplacian matrix; the signal is then multiplied in the spectral space according to the convolution theorem, and the inverse Fourier transform maps the result back to the original space, defining the convolution operator. For a graph G = (V, E, A) with N nodes, the normalized Laplacian matrix is defined as L = I_N − D^(−1/2) A D^(−1/2), where D ∈ R^(N×N) is the degree matrix with D_ii = Σ_j A_ij, and I_N is the identity matrix. The Laplacian matrix is symmetric; its eigendecomposition L = U Λ U^T yields a set of orthogonal eigenvectors U = [u_0, …, u_{N−1}] ∈ R^(N×N) with corresponding eigenvalues Λ = diag(λ_0, …, λ_{N−1}) ∈ R^(N×N). The Fourier transform of a graph signal x is defined as x̂ = U^T x, with inverse transform x = U x̂. With this definition, the convolution operation on graph G can be defined as shown in formula (9):

g_θ * x = U g_θ(Λ) U^T x   (9)
In formula (9), g_θ(Λ) is a learnable convolution kernel.
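Formula (9) in miniature: an all-pass filter g(Λ) = I must return the input signal unchanged, which gives a quick sanity check (a plain-numpy sketch with illustrative names, not the patent's implementation):

```python
import numpy as np

def spectral_conv(x, A, g):
    """Spectral graph convolution, formula (9)-style: transform the signal with
    the Laplacian eigenvectors U, filter in the spectrum, transform back."""
    n = A.shape[0]
    d = A.sum(1)
    Dis = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(n) - Dis @ A @ Dis                # normalized Laplacian
    lam, U = np.linalg.eigh(L)                   # L = U diag(lam) U^T
    return U @ (g(lam) * (U.T @ x))              # U g(Lambda) U^T x

A = np.array([[0.0, 1.0], [1.0, 0.0]])          # two connected nodes
x = np.array([1.0, -1.0])
y_id = spectral_conv(x, A, lambda lam: np.ones_like(lam))  # all-pass filter
```

Because U is orthogonal, the all-pass filter reproduces x exactly; a learnable g_θ(Λ) replaces the lambda in practice.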
Since the convolution kernel g_θ(Λ) defined above is a non-parametric filter, the graph convolution operation has no locality, every convolution involves all nodes, and the computational complexity is high. This embodiment therefore adopts the Chebyshev graph convolution layer Cheby from the graph convolution neural network ChebyNet, which parameterizes the convolution kernel with Chebyshev polynomials of the diagonal eigenvalue matrix as the filter and can capture local features of the graph. The Chebyshev polynomials are defined recursively as T_0(x) = 1, T_1(x) = x, T_k(x) = 2x T_{k−1}(x) − T_{k−2}(x). Fitting the convolution kernel with Chebyshev polynomials gives the Chebyshev graph convolution defined in formula (10):

g_θ * x ≈ Σ_{k=0..K−1} θ_k T_k(L̃) x   (10)
In formula (10), the scaled Laplacian matrix is defined as L̃ = 2L/λ_max − I_N. The scaling is required to satisfy the condition for K-th order truncation of the Chebyshev polynomials: the argument must lie within [−1, +1].
Because finger vein databases generally contain few samples, in order to further reduce the total number of training parameters and prevent overfitting, this embodiment sets the maximum eigenvalue λ_max of the Laplacian eigendecomposition in the Chebyshev graph convolution definition to 2, further simplifying the graph convolution operation and obtaining the simplified graph convolution layer SCheby; the convolution kernel size and the number of convolution layers are adjusted, and one additional graph convolution layer is added to strengthen the nonlinearity of the network. Fig. 9 shows the Chebyshev convolution from left to right, with the nodes aggregating features in turn from themselves, their first-order and their second-order neighborhoods.
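The Chebyshev recurrence of formula (10), with λ_max fixed to 2 as in SCheby, can be sketched as follows (a plain-numpy illustration with illustrative names, not the patent's PyTorch implementation):

```python
import numpy as np

def cheby_conv(X, A, theta):
    """Chebyshev graph convolution of order K = len(theta), formula (10)-style,
    with lambda_max fixed to 2 so that L~ = L - I (the SCheby simplification)."""
    n = A.shape[0]
    d = A.sum(1)
    Dis = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(n) - Dis @ A @ Dis                # normalized Laplacian
    L_t = L - np.eye(n)                          # 2L/lambda_max - I, lambda_max=2
    Tk_prev, Tk = X, L_t @ X                     # T_0(L~)X and T_1(L~)X
    out = theta[0] * Tk_prev
    if len(theta) > 1:
        out = out + theta[1] * Tk
    for k in range(2, len(theta)):
        Tk_prev, Tk = Tk, 2 * L_t @ Tk - Tk_prev  # Chebyshev recurrence
        out = out + theta[k] * Tk
    return out

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # 3-node path graph
X = np.eye(3)                                    # one-hot node signals
Y = cheby_conv(X, A, theta=[0.5, 0.3, 0.2])      # K = 3: up to 2-hop aggregation
```

With K terms, each node aggregates information from neighborhoods up to K−1 hops away, matching the fig. 9 description.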
For the pooling layer: the ChebyNet network provides a fast pooling method based on the Graclus clustering algorithm, but that method cannot select nodes adaptively and its pooling process does not consider the features of the graph. This embodiment adopts a gPool fast pooling layer in its place, which together with the improved graph convolution forms a graph convolution and pooling module. gPool realizes adaptive selection of a node subset mainly by defining a trainable projection vector p, projecting all node features onto one dimension, and then performing a top-k operation. For a graph G of N nodes, the information input into the network can be represented by the adjacency matrix A^l and the feature matrix X^l, where l denotes the current layer; given a node i and its feature vector x_i, the scalar projection onto p is y_i = x_i p / ||p||. The hierarchical propagation rule of the gPool graph pooling layer is:
y = X^l p^l / ||p^l||,
idx = rank(y, k),
ỹ = sigmoid(y(idx)),
X̃^l = X^l(idx, :),
A^(l+1) = A^l(idx, idx),
X^(l+1) = X̃^l ⊙ (ỹ 1_C^T)   (11)
In formula (11), k is the number of nodes selected for the new graph, and the pooling rate is defined as r = (1 − k/N) × 100%. y is the scalar projection of X^l onto p^l, y = [y_1, …, y_N]^T ∈ R^N; rank(y, k) is the node-sorting operation, returning the indices of the k maxima in y. y(idx) extracts the values of y at the indices idx, which are then activated by the sigmoid function. X^l(idx, :) and A^l(idx, idx) are the node feature matrix and adjacency matrix of the reconstructed graph after pooling. Finally, the element-wise product of the selected node features and the activated projection values controls how much information of the selected nodes is retained, realizing a gating operation so that the projection vector p can be trained through back propagation.
y_i measures how much information of node i can be retained when node i is projected onto the direction of p; the pooling thus performs a down-sampling operation on the graph nodes. In order to let the pooled graph retain more effective information of the original graph, this embodiment proposes improving the single-layer projection in the gPool layer into a three-layer linear projection, with 32, 16 and 1 neurons per layer respectively and a ReLU activation between layers. The parameter propagation process is:
y = W_3 ReLU(W_2 ReLU(pX) + b_2) + b_3   (12)
The projection vector p is the trainable weight of the first linear layer; the first layer learns no bias. The improved pooling layer is called MgPool, and later experiments show that MgPool performs better than the single-layer-projection gPool. The structure of the improved pooling layer is shown in fig. 10.
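A hedged sketch of the MgPool selection just described: a three-layer projection scores the nodes, the k best are kept, and the kept features are gated by the sigmoid of their scores. The layer widths (32/16/1 in the patent) are shrunk to toy sizes here, and all weight names are illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mgpool(X, A, k, W1, W2, b2, W3, b3):
    """MgPool-style top-k pooling (formula (12)-style scoring, W1 playing the
    role of the bias-free projection p; shapes here are illustrative)."""
    y = (relu(relu(X @ W1) @ W2 + b2) @ W3 + b3).ravel()  # node scores
    idx = np.argsort(-y)[:k]                     # rank(y, k): indices of k maxima
    X_new = X[idx] * sigmoid(y[idx])[:, None]    # gate kept nodes by y~(idx)
    A_new = A[np.ix_(idx, idx)]                  # pooled adjacency A(idx, idx)
    return X_new, A_new, idx

rng = np.random.default_rng(3)
X = rng.random((6, 4))
A = rng.random((6, 6)); A = (A + A.T) / 2        # toy symmetric adjacency
W1 = rng.random((4, 8)); W2 = rng.random((8, 3)); b2 = np.zeros(3)
W3 = rng.random((3, 1)); b3 = np.zeros(1)
Xp, Ap, idx = mgpool(X, A, k=3, W1=W1, W2=W2, b2=b2, W3=W3, b3=b3)
```

The sigmoid gating is what makes the projection weights trainable by back propagation, since the top-k index selection itself has no gradient.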
For the readout layer: after all updated node features are generated, directly using all of them as the final finger vein features would require large storage and a long matching time. Global pooling is currently the most primitive and most effective way to realize down-sampling of the graph, so the readout layer adopts global max pooling to obtain the graph-level feature, i.e. the final feature expression of the finger vein image:

R_G = max(x_1, x_2, …, x_N)   (13)

where the maximum is taken element-wise over the node feature vectors.
through the improvement, the improved graph convolution neural network is obtained. The structure of the improved graph convolution neural network is shown in fig. 11. With the increase of the number of graph convolution layers, the distinguishing characteristics of the nodes become poor, that is, an over-smooth problem occurs, and subsequent learning tasks are affected.
After the graph-level features of the finger veins are obtained, finger vein matching is realized by measuring the similarity of the graph-level features of different finger vein weighted graphs, as defined in formula (14):
In formula (14), R_G1 and R_G2 are the graph-level feature vectors of the two finger vein images to be matched, R̄_G1 and R̄_G2 are the corresponding means, and n is the dimension of the feature vectors.
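A sketch of the readout and matching steps. Hedge: formula (14) is not reproduced in the text, so a Pearson correlation of the two graph-level vectors is assumed here, consistent with the named vectors, their means and the dimension n; all names are illustrative:

```python
import numpy as np

def readout(X):
    """Global max pooling readout: one graph-level value per feature channel."""
    return X.max(axis=0)

def match_score(r1, r2):
    """Assumed formula (14): Pearson correlation of the two graph-level
    feature vectors (their means are subtracted, as the text describes)."""
    c1, c2 = r1 - r1.mean(), r2 - r2.mean()
    return float((c1 * c2).sum() /
                 (np.sqrt((c1 ** 2).sum()) * np.sqrt((c2 ** 2).sum()) + 1e-12))

rng = np.random.default_rng(4)
X1 = rng.random((10, 8))                         # 10 nodes, 8 feature channels
X2 = X1 + 0.01 * rng.random((10, 8))             # near-duplicate "same finger"
s_same = match_score(readout(X1), readout(X2))
s_diff = match_score(readout(X1), readout(rng.random((10, 8))))
```

Genuine pairs score near 1 while impostor pairs fall anywhere in [−1, 1], so a threshold on the score yields the accept/reject decision whose trade-off the ROC/EER evaluation below measures.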
Second embodiment:
Based on the first embodiment, this second embodiment performs experiments using 10 single-modality raw finger vein images acquired from each of 100 different individuals and the finger vein recognition method described in the first embodiment.
The recognition system employs the 1:1 matching mode and adopts the ROC (Receiver Operating Characteristic) curve as the index of system performance evaluation. An important ROC evaluation parameter is the EER (Equal Error Rate): the lower the system's EER, the fewer false matches the system makes, the better the classification of the corresponding test samples, and the better the performance of the recognition system.
The experimental environment uses a 64-bit Ubuntu operating system; the CPU is an Intel(R) Core(TM) i5-8300H with a main frequency of 2.30 GHz and 8 GB of memory; the GPU is an NVIDIA GeForce GTX 1050 Ti; the programming language is Python 3.5, and the deep learning framework is PyTorch 1.1.0.
In the experiments, the data set was iteratively trained for 30 epochs with a training batch size of 28, using an Adam optimizer with a learning rate of 0.001. To further avoid overfitting in network training, the dropout regularization method is adopted: part of the network nodes are randomly discarded during training, with all dropout rates set to 0.5.
In the experiments, the cross-entropy loss function is used for back propagation; the superpixel segmentation number of the method is set to 400, the receptive field size of the first graph convolution layer is 9, and the pooling rate is 30%.
(1) The weighted graph structure adopted by the method of this embodiment is compared with other graph structures for recognition.
First, a finger vein superpixel graph is generated with the SLIC superpixel segmentation algorithm; then finger vein weighted graphs with complete-graph, triangulation-graph, 8NN-graph and region-adjacency-graph structures are respectively constructed and input into the network for training, testing and recognition. The recognition results obtained are shown in table 1:
TABLE 1 System recognition results and training durations under different graph structures

| Graph structure | EER (%) | Training duration |
| --- | --- | --- |
| Complete graph | 0.09 | 269 |
| Triangulation graph | 1.83 | 295 |
| 8NN graph | 0.25 | 281 |
| Region adjacency graph | 1.04 | 307 |
As can be seen from table 1, the finger vein weighted graph with the complete-graph structure constructed in this embodiment has better recognition performance than the other graph structures, while its training time differs little from theirs. Although the complete graph has more edges than the other, partial graph structures, the edge weights are normalized and measure the spatial distance between nodes, and the weights of the many weakly connected edges are extremely small, so the computation does not become overly complex.
(2) The finger vein recognition system proposed in this embodiment, which constructs a finger vein weighted graph based on superpixel segmentation and uses the improved graph convolution neural network, is compared with other finger vein graph model recognition systems.
In order to better evaluate the feature characterization capability and recognition efficiency of the finger vein weighted graph recognition system provided by this embodiment, the method of this embodiment is compared with a finger vein recognition method that constructs a weighted triangulation graph based on feature similarity. The comparison method divides the image with patches of sizes 9 × 9 and 15 × 15, obtaining finger vein weighted graphs with 525 and 180 nodes respectively on the data set; the method of this embodiment constructs finger vein weighted graphs with the same node numbers. The experimental results are shown in table 2:
TABLE 2 comparison of different graph model-based finger vein recognition systems
As can be seen from table 2, the finger vein recognition system based on superpixel segmentation and the improved graph convolution neural network proposed in this embodiment has higher recognition accuracy; the recognition time does not change much with the size of the constructed graph model, and the single-image recognition efficiency is higher, which means the finger vein graph-level features extracted by this method have smaller dimensionality and better characterization capability.
(3) The improved graph convolution neural network compared with other graph convolution neural networks.
The improved graph convolution neural network for the finger vein weighted graph data set provided by this embodiment is compared with existing network models, including four graph convolution networks (Cheby, ChebyGIN, GCN and GIN) and three pooling networks (gPool, Coarse Pool and SAPool). Cheby and Coarse Pool are the graph convolution and graph pooling of ChebyNet; GCN and GIN are spatial-domain graph convolution methods whose defined aggregation functions can only aggregate information from the one-hop neighborhood of a node per convolution. To ensure the receptive fields of the convolution operations of the different graph convolution methods are the same, the convolution kernel sizes of SCheby, Cheby and ChebyGIN are all set to 2. The experimental results are shown in table 3:
TABLE 3 comparison of recognition results of different convolutional neural networks
As can be seen from table 3, compared with the GCN, GIN and ChebyGIN graph convolution methods, the network model built with SCheby in this embodiment has the lowest recognition equal error rate; its equal error rates are 0.30% and 1.01% higher than Cheby's, but the training times are shortened by 38.9% and 46.6% respectively, indicating that SCheby has fewer network parameters and that the model is less prone to overfitting when the graph convolution kernel is enlarged. Compared with the other pooling methods in the table, the proposed MgPool pooling method has the lowest recognition equal error rate, which on the two databases is 0.52% and 1.62% lower than gPool's respectively, showing that the proposed multilayer-projection pooling selects nodes more accurately and further improves the discrimination of the extracted features.
As can be seen from table 1, table 2 and table 3, the finger vein recognition method provided by the embodiment not only has strong finger vein feature characterization capability, but also has good performance in recognition accuracy and matching efficiency.
The third embodiment:
As shown in fig. 12, the third embodiment provides a finger vein recognition apparatus comprising: a finger vein image preprocessing module 21 configured to preprocess the acquired original finger vein image to obtain a preprocessed finger vein image; a finger vein weighted graph building module 22 configured to obtain a node set from the preprocessed finger vein image according to the SLIC superpixel segmentation algorithm and build a finger vein weighted graph from the node set; and a finger vein recognition module 23 configured to perform finger vein recognition on the finger vein weighted graph based on the improved graph convolution neural network to obtain a recognition result.
As an example, by the finger vein image preprocessing module 21, in consideration of noise such as an area with too large brightness change or a blurred texture detail area in the acquired original finger vein image, the original finger vein image is preprocessed to obtain a preprocessed finger vein image, so that a node set can be obtained from the preprocessed finger vein image with better image quality according to the SLIC superpixel segmentation algorithm.
Through the finger vein weighted graph construction module 22, when the preprocessed finger vein image is obtained, the preprocessed finger vein image is segmented according to the SLIC superpixel segmentation algorithm, a node set is constructed by taking the obtained superpixel blocks as graph nodes, and the finger vein weighted graph is constructed by using the node set.
The SLIC (simple linear iterative clustering) superpixel segmentation algorithm is to convert an image into a 5-dimensional vector under CIELAB color space and XY coordinates, construct a distance metric standard for the 5-dimensional vector, perform local clustering on image pixels to generate a superpixel block, and realize image segmentation. The SLIC superpixel segmentation algorithm can generate compact and approximately uniform superpixel blocks, and has high comprehensive evaluation in the aspects of operation speed, object contour maintenance and superpixel shape. The super-pixel block is a region formed by a series of pixel points which are adjacent in position and similar in color, gray scale, texture and other characteristics, so that effective information of an original image is reserved, the finger vein weighted graph constructed based on the super-pixel block is stable in structure and high in calculation efficiency, more perception information is contained, and a good effect is shown on expressing the randomness of the finger vein image.
Through the finger vein recognition module 23, when the finger vein weighted graph is obtained, the finger vein weighted graph is input into the improved graph convolution neural network, the depth features in the finger vein weighted graph are extracted through the improved graph convolution neural network, and feature comparison is performed on different finger vein weighted graphs to obtain a recognition result, so that finger vein recognition is realized.
In this embodiment, the finger vein weighted graph construction module 22 obtains a node set from the preprocessed finger vein image according to the SLIC superpixel segmentation algorithm and constructs a finger vein weighted graph from the node set, so that the finger vein recognition module 23 can perform finger vein recognition on the finger vein weighted graph based on the improved graph convolution neural network and obtain a recognition result, thereby realizing finger vein recognition. Because the finger vein weighted graph is constructed by segmenting the preprocessed finger vein image according to the SLIC superpixel segmentation algorithm and building the node set with the obtained superpixel blocks as graph nodes, the characteristics of superpixel blocks, namely retaining effective image information and containing more perceptual information, can be exploited, and the graph model of the finger vein image is built with both the stability of the graph structure and the randomness of the finger vein image taken into account, improving the accuracy of finger vein recognition. Meanwhile, performing finger vein recognition on the finger vein weighted graph based on the improved graph convolution neural network allows the depth features of the weighted graph to be extracted, further improving the recognition accuracy.
In a preferred embodiment, the preprocessing is performed on the acquired original finger vein image to obtain a preprocessed finger vein image, and specifically, the preprocessing includes: extracting an interested region from the original finger vein image to obtain an interested region image; and carrying out image smoothing on the image of the region of interest to obtain a preprocessed finger vein image.
In this embodiment, the finger vein image preprocessing module 21 extracts a region of interest from each acquired original finger vein image to obtain region of interest (ROI) images, normalizes the sizes of all ROI images (for example to 91 × 200), and assigns class labels so that ROI images from the same finger of the same person share the same class label. In addition, considering the areas with overly large brightness changes in the original finger vein images, image smoothing is performed on the ROI images, for example with a Gaussian filter with standard deviation 0.8, to obtain the preprocessed finger vein images.
In a preferred embodiment of this embodiment, the obtaining a node set from the preprocessed finger vein image according to the SLIC superpixel segmentation algorithm, and constructing the finger vein weighted graph by using the node set specifically includes: segmenting the preprocessed finger vein image according to an SLIC superpixel segmentation algorithm, and constructing a node set by taking the obtained superpixel blocks as graph nodes; wherein, the characteristics of the nodes in the node set are the gray characteristics and the spatial characteristics of the corresponding superpixel blocks; and connecting the nodes in the node set according to the graph structure of the undirected complete graph, constructing an edge set, and taking the spatial intimacy between the corresponding nodes as the weight of the edge to obtain the finger vein weighted graph.
A weighted graph of N nodes may be represented as G = (V, E, A), where V = {v_1, v_2, …, v_N} is the node set, E = {e_ij} is the edge set with e_ij = (v_i, v_j) (i ≠ j), and A ∈ R^(N×N) is the weighted adjacency matrix of the graph, in which w_ij ∈ A represents the weight on edge e_ij. The signal on the graph is defined as X; the node feature matrix of the graph is denoted X ∈ R^(N×C), with x_i ∈ R^C the feature vector of node v_i and C the dimension of the node features. The operations of the finger vein weighted graph construction module 22 for constructing the finger vein weighted graph include constructing the node set, calculating the node features, constructing the edge set and calculating the weights.
The SLIC superpixel segmentation algorithm generates superpixels based on improved K-means clustering. A preprocessed finger vein image is input and a segmentation number K is given. The algorithm first converts the image into the CIELAB color space, where the brightness l, the color values (a, b) and the spatial coordinates (x, y) of each pixel form a five-dimensional vector (l, a, b, x, y), and K initial clustering centers C_i = [l_i, a_i, b_i, x_i, y_i]^T are sampled on a grid with interval S = sqrt(N/K) pixels, where N is the total number of pixels. To avoid a clustering center falling on a contour boundary with a large gradient, each center is moved to the position with the minimum gradient in its 3 × 3 neighborhood, and a label is assigned to each pixel in the neighborhood searched by the clustering center. Unlike standard K-means clustering, which searches the whole image, the search range of the SLIC superpixel segmentation algorithm is limited to 2S × 2S, reducing distance calculations and thus accelerating convergence. For each searched pixel, the color distance d_c and the spatial distance d_s to the clustering center are calculated and normalized to obtain the final distance measure D', as shown in formulas (15), (16) and (17):

d_c = sqrt((l_j − l_i)² + (a_j − a_i)² + (b_j − b_i)²)   (15)
d_s = sqrt((x_j − x_i)² + (y_j − y_i)²)   (16)
D′ = sqrt((d_c/N_c)² + (d_s/N_s)²)   (17)
In formulas (15) to (17), N_s = S is the maximum spatial distance within a class and N_c is the maximum color distance, usually taken as a fixed constant m in the interval [1, 40] that weighs the relative importance of color similarity against spatial proximity. The above steps are iterated until the error converges; finally, a connected-component algorithm reassigns undersized, discontinuous superpixels to adjacent superpixel regions. After the finger vein superpixel graph is obtained through the SLIC superpixel segmentation algorithm, each superpixel block forms a graph node, giving the node set. The features of the nodes in the node set are the gray features and spatial features of the corresponding superpixel blocks. Because the image acquisition light source is constant while the thickness of the epidermis and subcutaneous tissue differs between finger individuals, the gray features in finger vein imaging differ between individuals; the gray levels of the vein and non-vein regions within the same finger vein image also differ, and each superpixel block has its own spatial position. Therefore the mean pixel intensity M_i = (1/N_i) Σ_n p_n and the centroid coordinate C = (x_i, y_i)/S_max of each superpixel block are taken as the gray and spatial features of the node, where N_i is the total number of pixels within the i-th superpixel, p_n is each pixel value, and S_max is the maximum of the image dimensions. Because the two feature scales differ, z-score (zero-mean) standardization is applied, giving the node features of the final finger vein weighted graph X = [M_sp, C_sp] ∈ R^(N×3).
The edges of the graph describe the connection relationships between nodes, and the edge weights describe the strength of those relationships. Typical edge sets used to construct graphs include region adjacency graphs, K-nearest-neighbor graphs, triangulation graphs and the like; these are all locally connected graph structures, the resulting graphs are highly sparse, and graph convolution extracts little perceptual information from them. Therefore, the nodes in the node set are connected according to the structure of an undirected complete graph, with exactly one edge between each pair of distinct nodes, so that the edge weights satisfy w_ij = w_ji; the edge set is constructed by taking the spatial affinity between the corresponding nodes as the edge weight. This spatial affinity is computed from the spatial distance between the superpixel centroid coordinates: the larger the distance between two superpixel centroids, the smaller the spatial affinity between the corresponding nodes and the weaker their connection. The weight is computed as shown in formula (18):

w_ij = exp(−((x_i − x_j)² + (y_i − y_j)²)/(2σ²))   (18)
In formula (18), (x_i, y_i) and (x_j, y_j) are the centroid coordinates of the two superpixel blocks, and σ is a scale parameter in [0, 1]; taking σ = 0.1π in this embodiment gives a good weight distribution, thereby yielding the finger vein weighted graph.
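Given a superpixel label map (such as the one SLIC produces), the node features (mean intensity plus normalized centroid) and the complete-graph Gaussian edge weights of formula (18) can be assembled as follows. This sketch assumes the centroid coordinates entering formula (18) are the S_max-normalized ones, which is what makes σ = 0.1π a sensible scale; the function name and that normalization choice are illustrative.

```python
import numpy as np

def vein_graph_from_labels(img, labels, sigma=0.1 * np.pi):
    """Node features X in R^(N x 3) and complete-graph weights W from a label map."""
    s_max = max(img.shape)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    ids = np.unique(labels)
    feats, cents = [], []
    for i in ids:
        mask = labels == i
        m_i = img[mask].mean()                                     # mean intensity M_i
        cx, cy = xx[mask].mean() / s_max, yy[mask].mean() / s_max  # C_i = (x_i, y_i)/S_max
        feats.append([m_i, cx, cy])
        cents.append([cx, cy])
    X = np.asarray(feats)
    X = (X - X.mean(0)) / (X.std(0) + 1e-12)        # z-score so both scales match
    c = np.asarray(cents)
    d2 = ((c[:, None] - c[None, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))              # eq. (18): Gaussian spatial affinity
    np.fill_diagonal(W, 0.0)                        # no self-loops
    return X, W
```

The weight matrix is symmetric (w_ij = w_ji), dense off the diagonal, and all entries lie in [0, 1], matching the undirected complete graph described above.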
In another preferred embodiment, the preprocessing is performed on the acquired original finger vein image to obtain a preprocessed finger vein image, specifically: extracting an interested region from the original finger vein image to obtain an interested region image; and carrying out image smoothing processing and image enhancement processing on the image of the region of interest to obtain a preprocessed finger vein image.
In this embodiment, the finger vein image preprocessing module 21 extracts a region of interest from each acquired original finger vein image to obtain a region-of-interest (ROI) image, normalizes the sizes of all ROI images (for example, to 91 × 200), and assigns class labels to them, giving ROI images from the same finger of the same person the same class label. It then performs image smoothing and image enhancement on the ROI images, for example smoothing with a Gaussian filter with standard deviation 0.8 and enhancing with a Gabor filter based on Weber's law, to obtain the preprocessed finger vein images.
The original finger vein image contains regions with blurred texture detail, mainly because of illumination variation during acquisition. A Gabor filter based on Weber's law is therefore adopted to enhance the image: Gabor wavelets are combined with the Weber principle, the direction of the maximum difference of the Weber local descriptor is taken as the dominant direction of the modal image, and an image-enhancement strategy is built on the Gabor filter. This weakens the influence of illumination changes on image quality, makes the finger vein texture details clearer, and improves the image representation.
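A rough sketch of this preprocessing stage, under stated assumptions: smoothing uses the σ = 0.8 Gaussian mentioned above, but the Weber-law-guided dominant-direction selection is approximated here by simply keeping the strongest response over a small bank of real Gabor kernels; the Gabor parameters (frequency, envelope width, kernel size) are hypothetical, not taken from the patent.

```python
import numpy as np
from scipy import ndimage

def gabor_kernel(theta, freq=0.1, sigma=4.0, size=21):
    """Real-valued Gabor kernel oriented along theta (illustrative parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def preprocess(roi):
    """Smooth with a sigma=0.8 Gaussian, then take the max Gabor response
    over 8 orientations as a stand-in for the Weber-guided enhancement."""
    smooth = ndimage.gaussian_filter(roi.astype(float), sigma=0.8)
    responses = [ndimage.convolve(smooth, gabor_kernel(t))
                 for t in np.linspace(0, np.pi, 8, endpoint=False)]
    return smooth, np.max(responses, axis=0)
```

Taking the per-pixel maximum over orientations mimics selecting the dominant direction of the local texture, which is the role the Weber local descriptor plays in the text.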
In a preferred implementation of this embodiment, obtaining a node set from the preprocessed finger vein image according to the SLIC superpixel segmentation algorithm and constructing the finger vein weighted graph from the node set specifically comprises: segmenting the preprocessed finger vein image according to the SLIC superpixel segmentation algorithm and constructing a node set with the obtained superpixel blocks as graph nodes, wherein the nodes in the node set are characterized by the directional energy distribution of the corresponding rectangularized superpixel blocks; and connecting the nodes in the node set according to the structure of an undirected complete graph, constructing an edge set with the feature similarity between corresponding nodes as the edge weight, and thus obtaining the finger vein weighted graph.
Since image segmentation on a smoothed-and-enhanced image exhibits different randomness than segmentation on a merely smoothed image, the finger vein weighted graph constructed by the SLIC superpixel segmentation algorithm also differs. In the finger vein weighted graph construction module 22, the superpixel blocks are first reconstructed into rectangles according to their length, width, and maximum edge values; adjacent reconstructed rectangular superpixel blocks overlap, which increases the correlation between blocks. The reconstructed rectangular superpixel blocks are then mapped onto the refined preprocessed finger vein image, and a steerable filter is used to extract the oriented energy distribution (OED) of each block as the node feature. The superpixel regions represented by the nodes have different shapes; ideally, regardless of region size, the steerable filter response yields an energy distribution over 0-360° that effectively describes the directional randomness of a block. The general formulas of the steerable filter are shown in formulas (19) to (21):

h_θ(x, y) = Σ_(j=1)^(N) k_j(θ) f(x, y; φ_j)   (19)

E(θ) = Σ_(x=1)^(X) Σ_(y=1)^(Y) (I * h_θ)²(x, y)   (20)
χ={E(1),E(2),…,E(θ),…,E(360)|I} (21)
In formulas (19) to (21), f(x, y) is a basis filter bank composed of trigonometric functions, φ_j is a filter direction, N is the number of basis filters f(x, y), k(θ) is the interpolation function, and θ is the direction of the steerable filter. Formula (20) computes the energy of the oriented filter, i.e., the energy of image I under filter h_θ in a given direction θ, where X and Y are the sizes of the filter and the image; in this embodiment I is a reconstructed superpixel rectangular block. Formula (21) stacks the directional energies for different θ into one vector, yielding the oriented energy distribution feature of the finger vein superpixel rectangular block; the final node features of the finger vein weighted graph are X = [χ_sp] ∈ R^(N×360).
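Formulas (19)-(21) can be illustrated with the classic steerable pair: the first derivative of a Gaussian, whose response at any angle θ is cos(θ)·G_x + sin(θ)·G_y, so only two basis responses are needed and k(θ) reduces to cosine/sine interpolation weights. This is one possible basis choice for a sketch, not necessarily the one used in the patent.

```python
import numpy as np
from scipy import ndimage

def oriented_energy(block, n_dirs=360):
    """Oriented energy distribution chi = [E(1), ..., E(360)] (eqs. (19)-(21))
    via the steerable first derivative of a Gaussian."""
    gx = ndimage.gaussian_filter(block.astype(float), 1.0, order=(0, 1))  # d/dx basis
    gy = ndimage.gaussian_filter(block.astype(float), 1.0, order=(1, 0))  # d/dy basis
    thetas = np.deg2rad(np.arange(1, n_dirs + 1))
    # Steered response: cos(theta)*G_x + sin(theta)*G_y; E(theta) is its
    # squared magnitude summed over the block (eq. (20)).
    resp = np.cos(thetas)[:, None] * gx.ravel() + np.sin(thetas)[:, None] * gy.ravel()
    return (resp ** 2).sum(axis=1)
```

For a block with a vertical edge the energy peaks near 0°/180° and nearly vanishes at 90°, which is the directional discrimination the node feature relies on.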
The nodes in the node set are connected according to the structure of an undirected complete graph, with exactly one edge between each pair of distinct nodes, so that the edge weights satisfy w_ij = w_ji; the edge set is constructed by taking the feature similarity between the corresponding nodes as the edge weight. Adjacent superpixel blocks are highly correlated and have high feature similarity, so the edge weight between their nodes is large; conversely, distant superpixel blocks are less correlated, and the edge weight between their nodes is small. The weight is computed as shown in formula (22):
In formula (22), f_i and f_j are the directional feature vectors of nodes i and j, respectively, and L is the length of f_i and f_j.
In a preferred embodiment, the finger vein weighted graph is subjected to finger vein recognition based on the improved graph convolution neural network, and a recognition result is obtained, specifically: and inputting the finger vein weighted graph into the improved graph convolution neural network, and comparing the similarity of the characteristics of all nodes in different finger vein weighted graphs through the improved graph convolution neural network to obtain an identification result.
The improved graph convolution neural network comprises a graph convolution layer, a pooling layer and a readout layer.
In a preferred embodiment of this embodiment, the map convolutional layer includes a map convolutional layer in which a convolutional kernel is defined by a chebyshev polynomial.
For the graph convolution layer: since the constructed finger vein weighted graph is non-sparse, spectral graph convolution can enlarge the receptive field of the convolution kernel and extract higher-order information from the graph, compared with the low-order neighborhood aggregation and layer stacking of spatial graph convolution. Frequency-domain graph convolution proceeds as follows: the graph signal is first transformed into the Fourier domain by the graph Fourier transform, whose definition depends on the eigenvectors and eigenvalues of the decomposed graph Laplacian matrix; the signals are then multiplied in the spectral space according to the convolution theorem, and the inverse Fourier transform maps the result back to the original space, defining the convolution operator. For an N-node graph G = (V, E, A), the normalized Laplacian matrix is defined as L = I_N − D^(−1/2) A D^(−1/2), where D ∈ R^(N×N) is the degree matrix with D_ii = Σ_j A_ij and I_N is the identity matrix. The Laplacian is a symmetric matrix; its eigendecomposition L = U Λ U^T gives a set of orthogonal eigenvectors U = [u_0, …, u_(N−1)] ∈ R^(N×N) with corresponding eigenvalues Λ = diag(λ_0, …, λ_(N−1)) ∈ R^(N×N). The Fourier transform of a graph signal x and its inverse are defined as x̂ = U^T x and x = U x̂. With this definition, the convolution operation on graph G can be defined as shown in formula (23):

(x *_G g_θ) = U g_θ(Λ) U^T x   (23)
in the formula (23), gθ(Λ) is a learnable convolution kernel.
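The spectral convolution of formula (23) can be demonstrated directly with a dense eigendecomposition. A small NumPy sketch (practical only for small graphs, which is exactly the cost the Chebyshev fitting below addresses):

```python
import numpy as np

def normalized_laplacian(A):
    """L = I_N - D^(-1/2) A D^(-1/2) for a weighted adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spectral_conv(A, x, g_of_lambda):
    """Spectral graph convolution (eq. (23)): U g(Lambda) U^T x."""
    L = normalized_laplacian(A)
    lam, U = np.linalg.eigh(L)          # L = U Lambda U^T (L is symmetric)
    x_hat = U.T @ x                     # graph Fourier transform
    return U @ (g_of_lambda(lam) * x_hat)   # filter in the spectrum, transform back
```

With the identity filter g(λ) = 1 the signal is returned unchanged, and with g(λ) = λ the result equals L x, which is a quick sanity check of the transform pair.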
Because the convolution kernel g_θ(Λ) defined above is a non-parametric filter, the graph convolution is not localized, every node participates in every convolution, and the computational complexity is high. The Chebyshev polynomials are defined recursively as T_0(x) = 1, T_1(x) = x, T_k(x) = 2x T_(k−1)(x) − T_(k−2)(x). Fitting the convolution kernel with a Chebyshev polynomial gives the Chebyshev graph convolution shown in formula (24):

g_θ * x ≈ Σ_(k=0)^(K−1) θ_k T_k(L̃) x   (24)
In formula (24), the scaled Laplacian is defined as L̃ = 2L/λ_max − I_N, with Λ̃ = 2Λ/λ_max − I_N the correspondingly scaled eigenvalue matrix; the scaling satisfies the condition for the K-th-order truncation of the Chebyshev polynomial, namely that the argument must lie in [−1, +1].
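The Chebyshev recursion of formula (24) avoids the eigendecomposition entirely, needing only repeated sparse matrix-vector products. A NumPy sketch:

```python
import numpy as np

def cheby_conv(L, x, theta, lam_max=2.0):
    """Chebyshev graph convolution (eq. (24)): sum_k theta_k T_k(L~) x,
    with L~ = 2L/lambda_max - I_N so the spectrum lies in [-1, +1]."""
    n = len(L)
    L_tilde = (2.0 / lam_max) * L - np.eye(n)
    t_prev, t_cur = x, L_tilde @ x          # T_0(L~)x = x,  T_1(L~)x = L~ x
    out = theta[0] * t_prev
    if len(theta) > 1:
        out = out + theta[1] * t_cur
    for k in range(2, len(theta)):
        # T_k(L~)x = 2 L~ T_{k-1}(L~)x - T_{k-2}(L~)x
        t_prev, t_cur = t_cur, 2 * L_tilde @ t_cur - t_prev
        out = out + theta[k] * t_cur
    return out
```

Note that setting lam_max = 2 gives L̃ = L − I_N, which is exactly the simplification the SCheby layer described below makes.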
Because finger vein databases generally contain few samples, and in order to further reduce the total number of training parameters and prevent overfitting, this embodiment sets the maximum eigenvalue of the Laplacian decomposition in the Chebyshev graph convolution to 2, further simplifying the graph convolution into a simplified layer called SCheby; the convolution kernel size and the number of convolution layers are adjusted, and one additional graph convolution layer is added to strengthen the network's nonlinearity.
For the pooling layer: the ChebyNet network provides fast pooling based on the Graclus clustering algorithm, but that method cannot adaptively select nodes and ignores the features of the graph during pooling. gPool achieves adaptive selection of a node subset mainly by defining a trainable projection vector p, projecting all node features onto one dimension, and then performing a top-k operation. For a graph G with N nodes, the input to the network at layer l can be represented by an adjacency matrix A^l and a feature matrix X^l. Given a node i with feature vector x_i, its scalar projection onto p is y_i = x_i p / ||p||. The layer-wise propagation rule of the gPool graph pooling layer is shown in formula (25):
In formula (25), k is the number of nodes retained in the new graph, and the pooling rate is defined as r = (1 − k/N) × 100%. y = [y_1, …, y_N]^T ∈ R^N is the scalar projection of X^l onto p^l; rank(y, k) is the node-sorting operation that returns the indices idx of the k largest values in y; y(idx) extracts the corresponding values of y, which are then passed through a sigmoid activation. X^l(idx, :) and A^l(idx, idx) slice the node feature matrix and the adjacency matrix to reconstruct the pooled graph. Finally, a gating operation multiplies the selected node features by the activated projections to control the selected node information, so that the projection vector p can be trained by back-propagation.
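The gPool propagation rule can be sketched as follows; the slicing and gating follow the description above (formula (25)), with NumPy indexing standing in for the graph reconstruction:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gpool(X, A, p, k):
    """gPool layer (eq. (25)): project node features onto p, keep the
    top-k nodes, gate their features with sigmoid(y), and slice A."""
    y = X @ p / np.linalg.norm(p)               # y_i = x_i p / ||p||
    idx = np.argsort(y)[::-1][:k]               # rank(y, k): indices of k largest
    X_new = X[idx] * sigmoid(y[idx])[:, None]   # gating keeps p trainable
    A_new = A[np.ix_(idx, idx)]                 # A(idx, idx)
    return X_new, A_new, idx
```

Without the sigmoid gate the top-k selection alone would be non-differentiable in p, so no gradient would reach the projection vector; the gate is what makes back-propagation work.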
y_i measures how much information of node i is retained when projected onto the direction of p, and the pooling performs a down-sampling operation on the graph nodes. So that the pooled graph retains more of the effective information of the original graph, this embodiment improves the single-layer projection of the gPool layer into a projection trained by three linear layers, with 32, 16, and 1 neurons per layer respectively and a ReLU activation between layers; the parameter propagation is as follows:
y = W_3 ReLU(W_2 ReLU(pX) + b_2) + b_3   (26)
The projection vector p is the trainable weight of the first linear layer, which learns no bias. The improved pooling layer is called MgPool, and subsequent experiments show that MgPool performs better than the single-layer projection of gPool.
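The three-layer MgPool projection of formula (26) as a forward pass; the 32/16/1 layer widths follow the text, while the input ordering (X multiplied by p, rather than the literal pX of the formula) is an assumption about how the shapes line up:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def mgpool_scores(X, p, W2, b2, W3, b3):
    """MgPool projection (eq. (26)): three linear layers (widths 32 -> 16 -> 1)
    with ReLU in between; p is the first layer's weight and has no bias."""
    h1 = relu(X @ p)            # first linear layer, no bias: (N, F) @ (F, 32)
    h2 = relu(h1 @ W2 + b2)     # (N, 32) @ (32, 16)
    return h2 @ W3 + b3         # (N, 16) @ (16, 1) -> one score per node
```

The resulting per-node scores replace the single scalar projection y of gPool; the top-k selection and gating then proceed exactly as before.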
For the readout layer: after all updated node features are generated, directly using all of them as the final finger vein features would require large storage and long matching times. Global pooling is currently the most basic and effective way to down-sample a graph, so the readout layer adopts global max pooling to obtain the graph-level feature, i.e., the final feature expression of the finger vein image:

R_G = max(x_1, x_2, …, x_N)   (27)

where the maximum is taken element-wise over the feature vectors of all N nodes.
Through the above improvements, the improved graph convolutional neural network is obtained. Note that as the number of graph convolution layers increases, the node features become less distinguishable, i.e., over-smoothing occurs, which would harm subsequent learning tasks.
After the graph-level features of the finger veins are obtained, finger vein matching is realized by measuring the similarity of the graph-level features of different finger vein weighted graphs, as defined in formula (28):

S(R_G1, R_G2) = Σ_(i=1)^(n) (R_G1,i − R̄_G1)(R_G2,i − R̄_G2) / sqrt(Σ_(i=1)^(n) (R_G1,i − R̄_G1)² · Σ_(i=1)^(n) (R_G2,i − R̄_G2)²)   (28)
In formula (28), R_G1 and R_G2 are the graph-level feature vectors of the two finger vein images to be matched, R̄_G1 and R̄_G2 are the corresponding means, and n is the dimension of the feature vectors.
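Under the reading of formula (28) as a Pearson-style correlation of the two graph-level vectors (consistent with the means and dimension n named above, though the exact closed form is reconstructed), the matching score is:

```python
import numpy as np

def match_score(r1, r2):
    """Similarity of two graph-level feature vectors as a Pearson
    correlation (reconstruction of eq. (28))."""
    d1, d2 = r1 - r1.mean(), r2 - r2.mean()
    return (d1 @ d2) / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-12)
```

Identical vectors score close to +1 and anti-correlated vectors close to −1, so a threshold on this score decides whether two finger vein images match.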
A third embodiment provides a computer-readable storage medium comprising a stored computer program; when the computer program runs, it controls the device on which the computer-readable storage medium resides to execute the finger vein recognition method of the first embodiment, achieving the same beneficial effects.
In summary, the embodiment of the present invention has the following advantages:
a node set is obtained from the preprocessed original finger vein image (i.e., the preprocessed finger vein image) according to the SLIC superpixel segmentation algorithm, a finger vein weighted graph is constructed from the node set, and finger vein recognition is performed on the weighted graph by the improved graph convolutional neural network to obtain the recognition result, thereby realizing finger vein recognition. The embodiment of the invention segments the preprocessed finger vein image with the SLIC superpixel segmentation algorithm, takes the obtained superpixel blocks as graph nodes to build the node set, and then constructs the finger vein weighted graph; it thereby exploits the fact that superpixel blocks retain the effective information of the image while carrying more perceptual information, and the graph model of the finger vein image jointly considers the stability of the graph structure and the randomness of the finger vein image, improving finger vein recognition accuracy.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
It will be understood by those skilled in the art that all or part of the processes of the above embodiments may be implemented by hardware related to instructions of a computer program, and the computer program may be stored in a computer readable storage medium, and when executed, may include the processes of the above embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Claims (10)
1. A finger vein recognition method, comprising:
preprocessing the collected original finger vein image to obtain a preprocessed finger vein image;
acquiring a node set from the preprocessed finger vein image according to a SLIC superpixel segmentation algorithm, and constructing a finger vein weighted graph by using the node set;
and performing finger vein recognition on the finger vein weighted graph based on the improved graph convolution neural network to obtain a recognition result.
2. The finger vein recognition method according to claim 1, wherein the preprocessing is performed on the acquired original finger vein image to obtain a preprocessed finger vein image, and specifically:
extracting an interested region from the original finger vein image to obtain an interested region image;
and carrying out image smoothing processing on the region-of-interest image to obtain the preprocessed finger vein image.
3. The finger vein recognition method according to claim 1, wherein the preprocessing is performed on the acquired original finger vein image to obtain a preprocessed finger vein image, and specifically:
extracting an interested region from the original finger vein image to obtain an interested region image;
and carrying out image smoothing processing and image enhancement processing on the image of the region of interest to obtain the preprocessed finger vein image.
4. The finger vein recognition method according to claim 2, wherein the node set is obtained from the preprocessed finger vein image according to a SLIC superpixel segmentation algorithm, and the node set is used to construct a finger vein weighted graph, specifically:
segmenting the preprocessed finger vein image according to an SLIC superpixel segmentation algorithm, and constructing the node set by taking the obtained superpixel blocks as graph nodes; wherein the characteristics of the nodes in the node set are the gray characteristics and the spatial characteristics corresponding to the superpixel blocks;
and connecting the nodes in the node set according to the graph structure of the undirected complete graph, constructing an edge set, and taking the spatial intimacy between the corresponding nodes as the weight of the edge to obtain the finger vein weighted graph.
5. The finger vein recognition method according to claim 3, wherein the node set is obtained from the preprocessed finger vein image according to a SLIC superpixel segmentation algorithm, and the node set is used to construct a finger vein weighted graph, specifically:
segmenting the preprocessed finger vein image according to an SLIC superpixel segmentation algorithm, and constructing the node set by taking the obtained superpixel blocks as graph nodes; wherein the nodes in the node set are characterized by the directional energy distribution corresponding to the superpixel graph blocks after the nodes are rectangular;
and connecting the nodes in the node set according to the graph structure of the undirected complete graph, constructing an edge set, and taking the feature similarity between the corresponding nodes as the weight of the edge to obtain the finger vein weighted graph.
6. The finger vein recognition method according to claim 1, wherein the finger vein recognition is performed on the finger vein weighted graph based on the improved graph convolution neural network to obtain a recognition result, specifically:
and inputting the finger vein weighted graph into the improved graph convolution neural network, and comparing the similarity of the characteristics of all nodes in different finger vein weighted graphs through the improved graph convolution neural network to obtain the identification result.
7. The finger vein recognition method of claim 1, wherein the modified atlas neural network comprises an atlas layer, a pooling layer, and a readout layer.
8. The finger vein recognition method of claim 7, wherein the map convolutional layer comprises a map convolutional layer in which a convolutional kernel is defined by using a chebyshev polynomial.
9. A finger vein recognition device, comprising:
the finger vein image preprocessing module is used for preprocessing the acquired original finger vein image to obtain a preprocessed finger vein image;
the finger vein weighted graph construction module is used for acquiring a node set from the preprocessed finger vein image according to a SLIC super-pixel segmentation algorithm and constructing a finger vein weighted graph by using the node set;
and the finger vein recognition module is used for carrying out finger vein recognition on the finger vein weighted graph based on the improved convolutional neural network to obtain a recognition result.
10. A computer-readable storage medium, comprising a stored computer program, wherein, when the computer program runs, it controls a device on which the computer-readable storage medium is located to perform the finger vein recognition method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010749346.7A CN111950406A (en) | 2020-07-28 | 2020-07-28 | Finger vein identification method, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111950406A true CN111950406A (en) | 2020-11-17 |
Family
ID=73338188
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010749346.7A Pending CN111950406A (en) | 2020-07-28 | 2020-07-28 | Finger vein identification method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111950406A (en) |
Non-Patent Citations (2)
Title |
---|
ZIYUN YE等: "Weighted Graph Based Description for Finger-Vein Recognition", 《BIOMETRIC RECOGNITION:12 CHINESE CONFERENCE,CCBR 2017》, pages 2 * |
李冉等: "改进GCNs 在指静脉特征表达中的应用", 《信号处理》, vol. 36, no. 4, pages 2 - 3 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112200156A (en) * | 2020-11-30 | 2021-01-08 | 四川圣点世纪科技有限公司 | Vein recognition model training method and device based on clustering assistance |
CN112560710A (en) * | 2020-12-18 | 2021-03-26 | 北京曙光易通技术有限公司 | Method for constructing finger vein recognition system and finger vein recognition system |
CN112560710B (en) * | 2020-12-18 | 2024-03-01 | 北京曙光易通技术有限公司 | Method for constructing finger vein recognition system and finger vein recognition system |
CN112801060A (en) * | 2021-04-07 | 2021-05-14 | 浙大城市学院 | Motion action recognition method and device, model, electronic equipment and storage medium |
CN113254648A (en) * | 2021-06-22 | 2021-08-13 | 暨南大学 | Text emotion analysis method based on multilevel graph pooling |
CN113254648B (en) * | 2021-06-22 | 2021-10-22 | 暨南大学 | Text emotion analysis method based on multilevel graph pooling |
CN113591629A (en) * | 2021-07-16 | 2021-11-02 | 深圳职业技术学院 | Finger three-mode fusion recognition method, system, device and storage medium |
CN113591629B (en) * | 2021-07-16 | 2023-06-27 | 深圳职业技术学院 | Finger tri-modal fusion recognition method, system, device and storage medium |
CN114155193A (en) * | 2021-10-27 | 2022-03-08 | 北京医准智能科技有限公司 | Blood vessel segmentation method and device based on feature enhancement |
CN116030352A (en) * | 2023-03-29 | 2023-04-28 | 山东锋士信息技术有限公司 | Long-time-sequence land utilization classification method integrating multi-scale segmentation and super-pixel segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20201117 |