WO2022052701A1 - Identity recognition method based on heterogeneous separation of gender information from multimodal vein images - Google Patents

Identity recognition method based on heterogeneous separation of gender information from multimodal vein images

Info

Publication number
WO2022052701A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature
vein
vein image
image
encoding
Prior art date
Application number
PCT/CN2021/111271
Other languages
English (en)
French (fr)
Inventor
王军
潘在宇
李玉莲
申政文
陈晓玲
Original Assignee
中国矿业大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国矿业大学 filed Critical 中国矿业大学
Publication of WO2022052701A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 - User authentication
    • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/14 - Vascular patterns

Definitions

  • The present invention relates to the field of hand vein recognition, and in particular to an identity recognition method based on the heterogeneous separation of gender information from multimodal vein images.
  • Owing to their subcutaneous (in vivo) imaging, temporal stability, and resistance to external interference, vein images are gradually becoming one of the mainstream biometric identification modalities.
  • Identity recognition based on vein images can be divided into two parts: gender attribute judgment and identity attribute judgment.
  • Traditional feature extraction methods used for identity recognition have not performed well on gender recognition, so an unsupervised sparse feature learning algorithm is proposed, for the first time, on the basis of vein images. The unsupervised sparse feature learning model can effectively uncover the spatial distribution of gender features in vein images, guarantees the sparsity of the feature vector distribution, and achieves good gender recognition results.
  • Binary feature encoding is widely used because of its high computational efficiency and the strong classification ability of the extracted coding features. Beyond traditional image texture measurement and description, it has been applied to other visual tasks such as face recognition, pedestrian detection, and vein recognition.
  • Traditional binary feature encoding is robust to illumination changes during feature extraction.
  • Ojala et al. extended traditional binary feature encoding to processing neighborhoods of arbitrary shape and size and proposed a circular binary feature encoding mode; building on this improvement, a rotation-invariant texture coding method was proposed. Soon afterwards, Ojala et al. proposed the Uniform LBP mode, which reduces the redundant modes of binary feature encoding to obtain a low-dimensional, densely distributed feature space.
  • However, all of the above encoding modes use the central pixel of the selected region as the binarization threshold, so they are sensitive to pixel noise and easily lose the contrast of the encoded region.
  • The main content of the present invention is an identity recognition method based on the heterogeneous separation of gender information from multimodal vein images. Identity recognition is divided into two steps, gender attribute classification and identity attribute classification, which effectively improves the pertinence of vein feature extraction and yields a robust identity recognition model.
  • The technical solution realizing the purpose of the present invention is an identity recognition method based on the heterogeneous separation of gender information from multimodal vein images, comprising the following steps:
  • Step 1: Use a dorsal hand vein acquisition device to collect K dorsal hand vein images, extract the effective area of each image, and perform zero-mean normalization to obtain K vein images of pixel size n×m, where male and female dorsal hand vein images each number K/2, 100<K<10000, 100<n<10000, 100<m<10000. Perform secondary normalization on the vein images to obtain the vein image database, use it as the input matrix $\tilde{X}^{(j)}$ of the two-layer unsupervised sparse feature learning model, and go to Step 2;
  • Step 2: Build a two-layer unsupervised sparse feature learning model, feed the input matrix $\tilde{X}^{(j)}$ into it for training, and obtain a sparse feature representation distributed in the gender-determination space of the vein information, called the sparse feature matrix of the dorsal hand vein images. Input it into a linear SVM for classification to obtain dorsal hand vein images with gender label information, namely male and female dorsal hand vein images, then go to Step 3 and Step 4 simultaneously;
  • Step 3: Perform feature extraction directly on the male dorsal hand vein images, compute the maximum inter-class variance binary feature encoding value and encoding weight of the current neighborhood, obtain the feature distribution histogram of the male vein images, and go to Step 5;
  • Step 4: Perform feature extraction directly on the female dorsal hand vein images and compute the maximum inter-class variance binary feature encoding value of the current neighborhood; then apply contrast enhancement to the female dorsal hand vein images and compute the maximum inter-class variance binary encoding weight. Combine the female feature encoding values and encoding weights to obtain the female current-neighborhood feature encoding values, and hence the feature distribution histogram of the female vein images, then go to Step 6;
  • Step 5: Use the chi-square distance method to match the male vein image feature distribution histogram against the vein image database, compute the feature vector similarity, and obtain the identity authentication result;
  • Step 6: Use the chi-square distance method to match the female vein image feature distribution histogram against the vein image database, compute the feature vector similarity, and obtain the identity authentication result.
  • Compared with the prior art, the present invention has the following significant advantages:
  • Through the separation of male and female gender information, the first-stage classification information of the vein images is obtained, which effectively improves the pertinence of vein feature extraction.
  • Figure 1 is a diagram of the gender-dependent multimodal vein image information encoding model.
  • Figure 2 shows the results of the ROI extraction method, where panel (a) is the original vein image, panel (b) is the ROI localization image, and panel (c) is the 180×180 vein image input to the two-layer unsupervised sparse feature learning model.
  • Figure 3 is a schematic diagram of the feature matrix distribution corresponding to the optimization of the objective function built from $l_2$ normalization and the $l_1$ penalty term.
  • Figure 4 illustrates the effect of the two-layer unsupervised sparse feature learning model on the gray-matrix distribution, where panel (a) shows the distribution of vein image samples without gender information separation and panel (b) shows the distribution after gender information separation.
  • Figure 5 shows the feature extraction results for male vein images.
  • Figure 6 shows the feature extraction results for female vein images.
  • Figure 7(a) compares the equal error rate (EER) of the maximum inter-class variance binary feature encoding model and the LIF models on the male vein image library.
  • Figure 7(b) compares the EER of the two models on the female vein images after contrast enhancement.
  • Figure 7(c) compares the EER results on the global vein sample images.
  • An identity recognition method based on the heterogeneous separation of gender information from multimodal vein images proceeds as follows:
  • Step 1: Use a dorsal hand vein acquisition device to collect K dorsal hand vein images, extract the effective area of each image, and perform zero-mean normalization to obtain K vein images of pixel size n×m, where male and female dorsal hand vein images each number K/2, 100<K<10000, 100<n<10000, 100<m<10000. The vein images are then subjected to secondary normalization to obtain the vein image database, which serves as the input matrix of the two-layer unsupervised sparse feature learning model:

    $$\tilde{X}^{(j)} = \frac{X_p}{\sqrt{\tfrac{1}{T}\sum_{t=1}^{T} X_{p,t}^{2} + \epsilon}}$$

    where $X_p$ is the vein image after zero-mean normalization, $\tilde{X}^{(j)}$ is the normalized input matrix, $\epsilon$ is a constant term, $j$ indexes gender, and $T$ is the normalized dimension.
  • Step 2: Build a two-layer unsupervised sparse feature learning model, feed the input matrix $\tilde{X}^{(j)}$ into it for training, and obtain a sparse feature representation distributed in the gender-determination space of the vein information, called the sparse feature matrix of the dorsal hand vein images; input it into a linear SVM for classification to obtain dorsal hand vein images with gender label information, namely male and female dorsal hand vein images. The details are as follows:
  • Step 2-1: Feed the input matrix $\tilde{X}^{(j)}$ into the two-layer unsupervised sparse feature learning model and go to Step 2-2;
  • Step 2-2: The model applies $l_2$ normalization to the row distribution of the input matrix, $\tilde{f}_j = f_j / \lVert f_j \rVert_2$, and then to its column distribution, $\hat{f}^{(i)} = \tilde{f}^{(i)} / \lVert \tilde{f}^{(i)} \rVert_2$, where $f_j$ denotes the row distribution of the input matrix and $f^{(i)}$ its column distribution; go to Step 2-3;
  • Step 2-3: Build the objective function of the model from the $l_1$ penalty term, $\min \sum_{i=1}^{K} \lVert \hat{f}^{(i)} \rVert_1$; go to Step 2-4;
  • Step 2-4: Iteratively minimize the objective function with the L-BFGS method to obtain the output matrix of the optimized first-layer network structure, and go to Step 2-5;
  • Step 2-5: Take the output matrix of the optimized first layer as the input and return to Step 2-1 until the output matrix of the optimized second layer is obtained; this is the sparse feature matrix of the dorsal hand vein images. Go to Step 2-6;
  • Step 2-6: Input the sparse feature matrix of the dorsal hand vein images into the linear SVM for classification to obtain dorsal hand vein images with gender label information, namely male and female dorsal hand vein images.
  • Step 3: Perform feature extraction directly on the male dorsal hand vein images and compute the maximum inter-class variance binary feature encoding value and encoding weight of the current neighborhood to obtain the feature distribution histogram of the male vein images. The encoding value of a neighborhood is

    $$A = \sum_{i=0}^{P-1} S(r_i - \tau^{*})\, 2^{i}, \qquad \mu_0 = \frac{1}{P_0} \sum_{r_i < \tau} r_i, \qquad \mu_1 = \frac{1}{P_1} \sum_{r_i \ge \tau} r_i$$

    where $r_i$ denotes a pixel to be encoded, $\tau$ the gray value of the pixel to be encoded, $P$ the number of adjacent pixels with $P = P_0 + P_1$, $P_0$ the number of pixels encoded as 0, $P_1$ the number of pixels encoded as 1, and $i$ the pixel index; the logical judgment function is $S(x) = 1$ for $x \ge 0$ and $S(x) = 0$ for $x < 0$.
  • Feature extraction is performed directly on the male dorsal hand vein image, and the encoding weight of the maximum inter-class variance of the current neighborhood is computed as follows. The encoding threshold in the maximum inter-class variance binary feature encoding model is the pixel value at which the inter-class variance of the classified pixel sets is maximal,

    $$\tau^{*} = \arg\max_{\tau}\, \sigma_B^{2}(\tau), \qquad \sigma_B^{2}(\tau) = \frac{P_0 P_1}{P^{2}}\,(\mu_0 - \mu_1)^{2}$$

    and the weight used for effective feature selection is

    $$\omega^{*} = \frac{\sigma_B^{2}(\tau^{*}) + C}{\sigma^{2} + C}$$

    where $\omega^{*}$ is the encoding weight of the maximum inter-class variance binary feature encoding, $\sigma^{2}$ is the overall variance of the neighborhood, $\sigma_B^{2}$ is the inter-class variance after neighborhood binarization, $C = 0.01^{2}$ is a constant added for numerical stability, and $\tau$ is the gray value of the pixel to be encoded.
  • Step 4: Perform feature extraction directly on the female dorsal hand vein images and compute the maximum inter-class variance binary feature encoding value of the current neighborhood; then apply contrast enhancement to the female dorsal hand vein images and compute the maximum inter-class variance binary encoding weight. Combine the female feature encoding values and encoding weights to obtain the female current-neighborhood feature encoding values, and hence the feature distribution histogram of the female vein images.
  • In Step 3 and Step 4, feature extraction is performed directly on the male/female dorsal hand vein images, and the maximum inter-class variance binary feature encoding value of the current neighborhood is computed as follows. Applying the maximum inter-class variance binary feature encoding directly to the dorsal hand vein image gives the per-pixel feature encoding values

    $$\mathcal{A}_i = w_i\, A_i, \qquad i = 1, \dots, m \times n$$

    where $m \times n$ is the size of the input male dorsal hand vein image, $A_i$ is the encoding value, $w_i$ is the image encoding weight, $\mathcal{A}_i$ is the feature encoding value of the corresponding pixel, and $i$ is the pixel index.
  • Contrast enhancement is applied to the female dorsal hand vein image, and the maximum inter-class variance binary encoding weight is computed as follows:

    $$\mathcal{B}_i = w_i^{CE}\, B_i, \qquad i = 1, \dots, m \times n$$

    where $m \times n$ is the size of the input vein image, $B_i$ is the feature encoding value, $w_i^{CE}$ is the image encoding weight after AHE processing, $\mathcal{B}_i$ is the feature encoding value of the corresponding pixel, and $i$ is the pixel index.
  • Step 5: Use the chi-square distance method to match the male vein image feature distribution histogram against the vein image database, compute the feature vector similarity, and obtain the identity authentication result;
  • Step 6: Use the chi-square distance method to match the female vein image feature distribution histogram against the vein image database, compute the feature vector similarity, and obtain the identity authentication result.
  • In Step 5 and Step 6, the chi-square distance method matches the male/female vein image feature distribution histogram against the vein image database and computes the feature vector similarity to obtain the identity authentication result. The chi-square similarity between male/female feature vectors is computed as

    $$\chi^{2}\left(D, F^{(k')}\right) = \sum_{i'=1}^{n} \sum_{j'=1}^{m} \frac{\left(D_{i'j'} - F^{(k')}_{i'j'}\right)^{2}}{D_{i'j'} + F^{(k')}_{i'j'}}$$

    where $n$ and $m$ give the vein image size, $i'$ and $j'$ index the currently computed image pixels, $D$ is the feature vector of the vein image to be identified (obtained from the statistics of the feature encoding values $\mathcal{A}$ and $\mathcal{B}$ computed in Steps 3 and 4), $F$ is a vein feature vector computed from the vein image library, $K$ is the number of vein images in the library, and $k'$ indexes the library feature vector currently being compared. Comparing the magnitudes of $\chi^{2}(D, F^{(k')})$ gauges how well the vein image to be identified matches each library image, and thus determines its identity.
  • The steps of the embodiment are as follows:
  • Step 1: Extract the effective area from the images of the vein image library; the result is shown in Figure 2. The 6000 dorsal hand vein sample images with extracted effective areas are normalized to obtain 180×180 vein image matrices, and the secondary-normalization function above is then used to generate the input matrix of the two-layer unsupervised sparse feature learning model.
  • Step 2: Feed the generated feature matrix into the two-layer unsupervised sparse feature learning model and apply $l_2$ normalization to the row and column distributions of the input matrix, so that its distribution is transformed from $f$ to $\hat{f}$, i.e. mapped onto the unit surface of the $l_2$-normalized matrix space.
  • Figure 3 shows how the output feature matrix acquires its multiple sparsity properties after $l_2$ normalization.
  • The L-BFGS minimization method is then used to optimize the objective function until the model converges, giving the optimized solution of the network.
  • The output matrix of the optimized first-layer network structure is used as input for a second round of training to obtain the sparse feature matrix. To guarantee the sparsity of the resulting matrix, relatively many zero-valued entries are retained.
  • The obtained sparse feature matrix is input into the linear SVM for classification, yielding the final gender determination result.
  • The present invention uses an efficient two-layer unsupervised sparse feature learning model to group male and female vein images of different quality, obtaining vein quality classification results consistent with subjective vision (HQ and LQ groups). The improved maximum inter-class variance binary feature encoding model then performs feature extraction: the feature encoding values extracted from high-quality vein images are denoted $\mathcal{A}$, and those from low-quality vein images $\mathcal{B}$. After valid feature encoding values are obtained, $l_1$ normalization is applied to the vectors to further improve their discriminability, and the normalized feature encoding values are concatenated to obtain the final histogram feature descriptor of the vein image.
  • Step 3: Local neighborhood binary coding feature extraction is performed directly on the male vein images of the HQ group using the maximum inter-class variance encoding. Figure 5 shows the feature histogram distributions of two different HQ-group input vein images obtained with this coding rule; the two images are highly distinguishable, which fully demonstrates the effectiveness of the features extracted by the model.
  • Step 4: Feature extraction for the LQ group is as follows. For each LQ-group input image, adaptive histogram equalization is used to enhance the contrast, and feature extraction is performed on the enhanced image; at the same time, feature extraction is also performed on the vein image without contrast enhancement. The weights from the feature representation of the contrast-enhanced image are combined with the feature encoding of the non-enhanced image to obtain the final feature representation $\mathcal{B}_i = w_i^{CE}\, B_i$, where $m \times n$ is the size of the input vein image, $B_i$ is the encoding value, $w_i^{CE}$ is the image encoding weight after AHE processing, and $i$ is the pixel index.
  • Steps 5 to 6: After $l_1$ normalization of the obtained pixel feature encoding values of the HQ and LQ groups, they are concatenated to obtain the feature histogram descriptor. In the vein image matching stage, the chi-square distance method is proposed to compute the feature vector similarity and obtain the final matching result.
  • To verify the effectiveness of the proposed improved maximum inter-class variance binary feature encoding model (DLBP), its equal error rate was compared with that of the LIF models (SIFT and its improved variants SURF, ASIFT and RootSIFT) on the vein image database, as shown in Figures 7(a), 7(b) and 7(c). The results show that on the male vein image library, the contrast-preprocessed female vein image library and the global vein image library, the proposed model achieves lower equal error rates, indicating the effectiveness of its feature extraction and generation strategy. (A minimal sketch of the EER computation follows this list.)

Abstract

The invention discloses an identity recognition method based on the heterogeneous separation of gender information from multimodal vein images, relying on a two-layer unsupervised sparse feature learning model and an improved maximum inter-class variance binary feature encoding model. The invention relates to the field of computer vision and comprises the steps of building a vein image database, heterogeneously separating the gender information of dorsal hand vein images via unsupervised sparse feature learning, performing feature extraction on the basis of the gender attribute judgment, computing the binary feature encoding values and encoding weights of the images, judging feature vector similarity, and performing identity recognition. By separating the gender information of vein images and using gender labels to guide different feature extraction strategies for identity authentication, the disclosed method improves the accuracy of identity recognition and can well satisfy identity recognition applications with high accuracy requirements.

Description

Identity recognition method based on heterogeneous separation of gender information from multimodal vein images

Technical Field

The present invention relates to the field of hand vein recognition, and in particular to an identity recognition method based on the heterogeneous separation of gender information from multimodal vein images.
Background Art

Owing to their subcutaneous (in vivo) imaging, temporal stability and resistance to external interference, vein images are gradually becoming one of the mainstream biometric identification modalities. Identity recognition based on vein images can be divided into two parts: gender attribute judgment and identity attribute judgment.

Traditional feature extraction methods used for identity recognition have not performed well on gender recognition, so an unsupervised sparse feature learning algorithm is proposed, for the first time, on the basis of vein images. The unsupervised sparse feature learning model can effectively uncover the spatial distribution of gender features in vein images, guarantees the sparsity of the feature vector distribution, and achieves good gender recognition results.

Binary feature encoding is widely used because of its high computational efficiency and the strong classification ability of the extracted coding features; beyond traditional image texture measurement and description, it has been applied to other visual tasks such as face recognition, pedestrian detection and vein recognition. Traditional binary feature encoding is robust to illumination changes during feature extraction. Ojala et al. extended traditional binary feature encoding to processing neighborhoods of arbitrary shape and size and proposed a circular binary feature encoding mode, then improved on it with a rotation-invariant texture coding method. Soon afterwards, Ojala et al. proposed the Uniform LBP mode, which reduces the redundant modes of binary feature encoding to obtain a low-dimensional, densely distributed feature space. However, all of the above encoding modes use the central pixel of the selected region as the binarization threshold, so they are sensitive to pixel noise and easily lose the contrast of the encoded region.
Summary of the Invention

The main content of the present invention is an identity recognition method based on the heterogeneous separation of gender information from multimodal vein images. Identity recognition is divided into two steps, gender attribute classification and identity attribute classification, which effectively improves the pertinence of vein feature extraction and yields a robust identity recognition model.

The technical solution realizing the purpose of the present invention is an identity recognition method based on the heterogeneous separation of gender information from multimodal vein images, comprising the following steps:

Step 1: Use a dorsal hand vein acquisition device to collect K dorsal hand vein images, extract the effective area of each image, and perform zero-mean normalization to obtain K vein images of pixel size n×m, where male and female dorsal hand vein images each number K/2, 100<K<10000, 100<n<10000, 100<m<10000. Perform secondary normalization on the vein images to obtain the vein image database, use it as the input matrix $\tilde{X}^{(j)}$ of the two-layer unsupervised sparse feature learning model, and go to Step 2;

Step 2: Build a two-layer unsupervised sparse feature learning model, feed the input matrix $\tilde{X}^{(j)}$ into it for training, and obtain a sparse feature representation distributed in the gender-determination space of the vein information, called the sparse feature matrix of the dorsal hand vein images; input it into a linear SVM for classification to obtain dorsal hand vein images with gender label information, namely male and female dorsal hand vein images, then go to Step 3 and Step 4 simultaneously;

Step 3: Perform feature extraction directly on the male dorsal hand vein images, compute the maximum inter-class variance binary feature encoding value and encoding weight of the current neighborhood, obtain the feature distribution histogram of the male vein images, and go to Step 5;

Step 4: Perform feature extraction directly on the female dorsal hand vein images and compute the maximum inter-class variance binary feature encoding value of the current neighborhood; then apply contrast enhancement to the female dorsal hand vein images, compute the maximum inter-class variance binary encoding weight, combine the female feature encoding values and encoding weights to obtain the female current-neighborhood feature encoding values, and hence the feature distribution histogram of the female vein images; go to Step 6;

Step 5: Use the chi-square distance method to match the male vein image feature distribution histogram against the vein image database, compute the feature vector similarity, and obtain the identity authentication result;

Step 6: Use the chi-square distance method to match the female vein image feature distribution histogram against the vein image database, compute the feature vector similarity, and obtain the identity authentication result.

Compared with the prior art, the present invention has the following significant advantages:

(1) Through the separation of male and female gender information, the first-stage classification information of the vein images is obtained, which effectively improves the pertinence of vein feature extraction.

(2) The two-layer unsupervised sparse feature learning model, proposed here for the first time, improves the accuracy of the dorsal hand vein recognition model.

(3) Using gender attributes to guide different feature extraction strategies for identity authentication improves the accuracy of identity recognition and can well satisfy various identity recognition applications with high accuracy requirements built on feature learning and feature extraction.

(4) The chi-square distance method is used to match the feature distribution histograms and compute feature vector similarity, reducing the computational load of the model.
Brief Description of the Drawings

Figure 1 is a diagram of the gender-dependent multimodal vein image information encoding model.

Figure 2 shows the results of the ROI extraction method, where panel (a) is the original vein image, panel (b) is the ROI localization image, and panel (c) is the 180×180 vein image input to the two-layer unsupervised sparse feature learning model.

Figure 3 is a schematic diagram of the feature matrix distribution corresponding to the optimization of the objective function built from $l_2$ normalization and the $l_1$ penalty term.

Figure 4 illustrates the effect of the two-layer unsupervised sparse feature learning model on the gray-matrix distribution, where panel (a) shows the distribution of vein image samples without gender information separation and panel (b) shows the distribution after gender information separation.

Figure 5 shows the feature extraction results for male vein images.

Figure 6 shows the feature extraction results for female vein images.

Figure 7(a) compares the equal error rate (EER) of the maximum inter-class variance binary feature encoding model and the LIF models on the male vein image library.

Figure 7(b) compares the EER of the two models on the female sample images after contrast enhancement.

Figure 7(c) compares the EER results on the global vein sample images.
Detailed Description of Embodiments

To make the purpose, technical solution and advantages of the present invention clearer, the embodiments of the invention are described in further detail below.

With reference to Figure 1, an identity recognition method based on the heterogeneous separation of gender information from multimodal vein images proceeds as follows:

Step 1: Use a dorsal hand vein acquisition device to collect K dorsal hand vein images, extract the effective area of each image, and perform zero-mean normalization to obtain K vein images of pixel size n×m, where male and female dorsal hand vein images each number K/2, 100<K<10000, 100<n<10000, 100<m<10000. The vein images are then subjected to secondary normalization via

$$\tilde{X}^{(j)} = \frac{X_p}{\sqrt{\tfrac{1}{T}\sum_{t=1}^{T} X_{p,t}^{2} + \epsilon}}$$

to obtain the vein images, where $X_p$ denotes the vein image after zero-mean normalization, $\tilde{X}^{(j)}$ denotes the (normalized) input matrix of the two-layer unsupervised sparse feature learning model, $\epsilon$ is a constant term with $\epsilon = 10^{8}$, $j$ denotes gender, and $T$ denotes the normalized dimension.

Go to Step 2.
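As an illustration only, the preprocessing of Step 1 can be sketched as below. The exact secondary-normalization equation is an image in the source, so the per-sample scaling, and the small stabilizer used in place of the quoted $\epsilon = 10^{8}$, are reconstructions and should be read as assumptions.

```python
import numpy as np

def preprocess(images, eps=1e-8):
    """(K, n, m) ROI-cropped vein images -> (K, T) rows of the input matrix.

    Note: the source quotes the constant as epsilon = 10^8; a small
    stabilizer such as 1e-8 is the usual choice, so the magnitude used
    here is an assumption.
    """
    K = images.shape[0]
    X = images.reshape(K, -1).astype(np.float64)    # flatten: T = n * m
    X -= X.mean(axis=1, keepdims=True)              # zero-mean normalization
    T = X.shape[1]
    scale = np.sqrt((X ** 2).sum(axis=1, keepdims=True) / T + eps)
    return X / scale                                # secondary normalization

demo = np.random.default_rng(0).integers(0, 256, size=(4, 180, 180))
print(preprocess(demo).shape)  # (4, 32400)
```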
Step 2: Build a two-layer unsupervised sparse feature learning model, feed the input matrix $\tilde{X}^{(j)}$ into it for training, and obtain a sparse feature representation distributed in the gender-determination space of the vein information, called the sparse feature matrix of the dorsal hand vein images; input it into a linear SVM for classification to obtain dorsal hand vein images with gender label information, namely male and female dorsal hand vein images. The details are as follows:

Step 2-1: Feed the input matrix $\tilde{X}^{(j)}$ into the two-layer unsupervised sparse feature learning model and go to Step 2-2;

Step 2-2: The model applies $l_2$ normalization to the row distribution of the input matrix, $\tilde{f}_j = f_j / \lVert f_j \rVert_2$, and then to its column distribution, $\hat{f}^{(i)} = \tilde{f}^{(i)} / \lVert \tilde{f}^{(i)} \rVert_2$, where $f_j$ denotes the row distribution of the input matrix and $f^{(i)}$ its column distribution; go to Step 2-3;

Step 2-3: Build the objective function of the model from the $l_1$ penalty term:

$$\min \sum_{i=1}^{K} \left\lVert \hat{f}^{(i)} \right\rVert_1$$

Go to Step 2-4;

Step 2-4: Iteratively minimize the objective function with the L-BFGS method to obtain the output matrix of the optimized first-layer network structure, and go to Step 2-5;

Step 2-5: Take the output matrix of the optimized first layer as the input and return to Step 2-1 until the output matrix of the optimized second layer is obtained, which is the sparse feature matrix of the dorsal hand vein images; go to Step 2-6;

Step 2-6: Input the sparse feature matrix of the dorsal hand vein images into the linear SVM for classification to obtain dorsal hand vein images with gender label information, namely male and female dorsal hand vein images.

Then go to Step 3 and Step 4 simultaneously.
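A minimal sketch of one layer of this step follows, under the assumption that it matches the sparse-filtering pattern implied by the text ($l_2$-normalize rows, then columns, minimize the $l_1$ penalty with L-BFGS); the layer sizes, the use of SciPy, and the soft absolute value are illustrative choices, not prescribed by the patent.

```python
import numpy as np
from scipy.optimize import minimize

def sparse_filter_layer(X, n_features, seed=0, maxiter=20):
    """One sparse-filtering layer. X: (T, K), one column per vein image."""
    T, K = X.shape
    w0 = 0.01 * np.random.default_rng(seed).standard_normal((n_features, T))

    def objective(w_flat):
        W = w_flat.reshape(n_features, T)
        F = np.sqrt((W @ X) ** 2 + 1e-8)                   # soft absolute value
        F = F / np.linalg.norm(F, axis=1, keepdims=True)   # l2 over rows
        F = F / np.linalg.norm(F, axis=0, keepdims=True)   # l2 over columns
        return F.sum()                                     # l1 penalty (entries >= 0)

    # Analytic gradient omitted for brevity; SciPy falls back to finite
    # differences, so keep the toy sizes small.
    res = minimize(objective, w0.ravel(), method="L-BFGS-B",
                   options={"maxiter": maxiter})
    W = res.x.reshape(n_features, T)
    F = np.sqrt((W @ X) ** 2 + 1e-8)
    F = F / np.linalg.norm(F, axis=1, keepdims=True)
    return F / np.linalg.norm(F, axis=0, keepdims=True)

# Two layers: the first layer's output feeds the second (Step 2-5).
X = np.random.default_rng(1).standard_normal((36, 60))  # toy: 36-dim, 60 images
features = sparse_filter_layer(sparse_filter_layer(X, 16), 8, seed=2)
print(features.shape)  # (8, 60)
```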
Step 3: Perform feature extraction directly on the male dorsal hand vein images, compute the maximum inter-class variance binary feature encoding value and encoding weight of the current neighborhood, and obtain the feature distribution histogram of the male vein images. The encoding value is

$$A = \sum_{i=0}^{P-1} S(r_i - \tau^{*})\, 2^{i}$$

with the means $\mu_0$ and $\mu_1$ given by

$$\mu_0 = \frac{1}{P_0} \sum_{r_i < \tau} r_i, \qquad \mu_1 = \frac{1}{P_1} \sum_{r_i \ge \tau} r_i$$

where $r_i$ denotes a pixel to be encoded, $\tau$ the gray value of the pixel to be encoded, $P = P_0 + P_1$ the number of neighborhood pixels, $P_0$ the number of pixels encoded as 0, $P_1$ the number encoded as 1, and $i$ the pixel index. The logical judgment function $S(x)$ is

$$S(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}$$

Further, feature extraction is performed directly on the male dorsal hand vein image, and the encoding weight of the maximum inter-class variance of the current neighborhood is computed as follows. The encoding weight in the maximum inter-class variance binary feature encoding model is determined at the pixel value for which the inter-class variance of the classified pixel sets is maximal:

$$\tau^{*} = \arg\max_{\tau}\, \sigma_B^{2}(\tau)$$

where $\tau^{*}$ is the optimal encoding threshold, $\sigma_B^{2}$ is the inter-class variance after neighborhood binarization, $r_i$ is a pixel to be encoded, $i$ is the pixel index, and $P = P_0 + P_1$ is the number of neighborhood pixels with $P_0$ pixels encoded as 0 and $P_1$ encoded as 1. The inter-class variance $\sigma_B^{2}$ is

$$\sigma_B^{2} = \frac{P_0 P_1}{P^{2}}\left(\mu_0 - \mu_1\right)^{2}$$

and the weight realizing effective feature selection is solved as

$$\omega^{*} = \frac{\sigma_B^{2}(\tau^{*}) + C}{\sigma^{2} + C}$$

where $\omega^{*}$ is the encoding weight of the maximum inter-class variance binary feature encoding, $\sigma^{2}$ denotes the overall variance of the neighborhood, $\sigma_B^{2}$ denotes the inter-class variance after neighborhood binarization, $C$ is a constant added to guarantee the numerical stability of the optimized weight solution, set to $C = 0.01^{2}$, and $\tau$ denotes the gray value of the pixel to be encoded.

Go to Step 5.
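A minimal sketch of this maximum inter-class variance (Otsu-style) encoding for one P-pixel neighborhood follows; the weight formula mirrors the reconstruction above and is therefore an assumption.

```python
import numpy as np

def encode_neighborhood(r, C=0.01 ** 2):
    """r: gray values of the P neighborhood pixels -> (code, weight omega*)."""
    r = np.asarray(r, dtype=np.float64)
    P = r.size
    best_var, tau_star = 0.0, r.min()
    for tau in np.unique(r):                 # sweep candidate thresholds
        low, high = r[r < tau], r[r >= tau]
        if low.size == 0 or high.size == 0:
            continue
        p0, p1 = low.size / P, high.size / P
        var_b = p0 * p1 * (low.mean() - high.mean()) ** 2  # sigma_B^2(tau)
        if var_b > best_var:
            best_var, tau_star = var_b, tau
    bits = (r >= tau_star).astype(np.int64)  # S(r_i - tau*)
    code = int((bits << np.arange(P)).sum()) # sum of S(r_i - tau*) * 2**i
    weight = (best_var + C) / (r.var() + C)  # omega* from the formula above
    return code, weight

print(encode_neighborhood([12, 15, 200, 210, 13, 205, 14, 199]))
```

With the usual 8-pixel neighborhood, the codes fall in [0, 255], which matches a 256-bin feature distribution histogram.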
Step 4: Perform feature extraction directly on the female dorsal hand vein images and compute the maximum inter-class variance binary feature encoding value of the current neighborhood; then apply contrast enhancement to the female dorsal hand vein images and compute the maximum inter-class variance binary encoding weight. Combine the female feature encoding values and encoding weights to obtain the female current-neighborhood feature encoding values, and hence the feature distribution histogram of the female vein images.

In Step 3 and Step 4, feature extraction is performed directly on the male/female dorsal hand vein images, and the maximum inter-class variance binary feature encoding value of the current neighborhood is computed as follows. Applying the maximum inter-class variance binary feature encoding directly to the dorsal hand vein image yields the feature encoding values

$$\mathcal{A}_i = w_i\, A_i, \qquad i = 1, \dots, m \times n$$

where $m \times n$ is the size of the input male dorsal hand vein image, $A_i$ is the encoding value, $w_i$ is the image encoding weight, $\mathcal{A}_i$ is the feature encoding value of the corresponding pixel, and $i$ is the pixel index.

Contrast enhancement is applied to the female dorsal hand vein image, and the maximum inter-class variance binary encoding weight is computed as follows:

$$\mathcal{B}_i = w_i^{CE}\, B_i, \qquad i = 1, \dots, m \times n$$

where $m \times n$ is the size of the input vein image, $B_i$ is the feature encoding value, $w_i^{CE}$ is the image encoding weight after AHE processing, $\mathcal{B}_i$ is the feature encoding value of the corresponding pixel, and $i$ is the pixel index.

Go to Step 6.
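A minimal sketch of the LQ-group (female) encoding follows: codes come from the raw image, weights from the AHE-enhanced image. scikit-image's equalize_adapthist stands in for the AHE step, and encode_neighborhood is assumed to be the helper from the Step 3 sketch; neither choice is mandated by the patent, and the nested loop is illustrative rather than optimized.

```python
import numpy as np
from skimage.exposure import equalize_adapthist

def encode_lq_image(image, encode_neighborhood):
    """Per-pixel (B_i, w_i^CE) for a low-quality vein image (3x3 neighborhoods)."""
    img = np.asarray(image, dtype=np.float64) / 255.0
    enhanced = equalize_adapthist(img)       # AHE/CLAHE contrast enhancement
    H, W = img.shape
    codes = np.zeros((H - 2, W - 2), dtype=np.int64)
    weights = np.zeros((H - 2, W - 2))
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            raw = np.delete(img[y-1:y+2, x-1:x+2].ravel(), 4)       # 8 neighbors
            ce = np.delete(enhanced[y-1:y+2, x-1:x+2].ravel(), 4)
            codes[y-1, x-1], _ = encode_neighborhood(raw)    # B_i from raw image
            _, weights[y-1, x-1] = encode_neighborhood(ce)   # w_i^CE from AHE image
    return codes, weights
```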
Step 5: Use the chi-square distance method to match the male vein image feature distribution histogram against the vein image database, compute the feature vector similarity, and obtain the identity authentication result;

Step 6: Use the chi-square distance method to match the female vein image feature distribution histogram against the vein image database, compute the feature vector similarity, and obtain the identity authentication result.

In Step 5 and Step 6, the chi-square distance method matches the male/female vein image feature distribution histograms against the vein image database and computes feature vector similarity to obtain the identity authentication result, specifically:

The chi-square distance method computes the male/female feature vector similarity via

$$\chi^{2}\left(D, F^{(k')}\right) = \sum_{i'=1}^{n} \sum_{j'=1}^{m} \frac{\left(D_{i'j'} - F^{(k')}_{i'j'}\right)^{2}}{D_{i'j'} + F^{(k')}_{i'j'}}$$

where $n$ and $m$ give the vein image size, and $i'$ and $j'$ index the currently computed image pixels. $D$ is the feature vector of the vein image to be identified, obtained from the statistics of the feature encoding values $\mathcal{A}$ and $\mathcal{B}$ computed in Steps 3 and 4; $F$ is a vein feature vector computed from the vein image library; $K$ is the number of vein images in the library; and $k'$ indexes the library feature vector currently being compared. $\chi^{2}(D, F^{(k')})$ is the chi-square distance between feature vectors $D$ and $F^{(k')}$. Comparing the magnitudes of $\chi^{2}(D, F^{(k')})$ gauges how well the vein image to be identified matches each image in the library, and thus determines its identity.
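A minimal sketch of this matching step follows. The patent's formula sums over pixel positions; the sketch applies the same distance to flattened descriptor vectors, which is equivalent up to ordering.

```python
import numpy as np

def chi_square(d, f, eps=1e-12):
    """Chi-square distance between two feature vectors D and F."""
    d = np.asarray(d, dtype=np.float64)
    f = np.asarray(f, dtype=np.float64)
    return float(((d - f) ** 2 / (d + f + eps)).sum())

def identify(query, library):
    """Return (index of best match, its distance) over the enrolled library."""
    dists = [chi_square(query, h) for h in library]
    k = int(np.argmin(dists))
    return k, dists[k]

rng = np.random.default_rng(0)
library = rng.random((5, 256))              # five enrolled descriptors
query = library[2] + 0.01 * rng.random(256) # noisy copy of entry 2
print(identify(query, library))             # -> (2, small distance)
```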
Embodiment 1

With reference to Figure 1, the identity recognition method based on the heterogeneous separation of gender information from multimodal vein images according to the present invention proceeds as follows:

Step 1: Extract the effective area from the images of the vein image library; the result is shown in Figure 2. Before training the unsupervised sparse feature learning model, the 6000 dorsal hand vein sample images with extracted effective areas are data-normalized to obtain 180×180 vein image matrices, and the secondary-normalization function above is then used to generate the input matrix of the two-layer unsupervised sparse feature learning model.

Step 2: Feed the generated feature matrix into the two-layer unsupervised sparse feature learning model and apply $l_2$ normalization separately to the row and column distributions of the input matrix, so that the distribution of the input matrix is transformed from $f$ to $\hat{f}$, i.e. mapped onto the unit surface of the $l_2$-normalized matrix space. The objective function $\min \sum_{i=1}^{K} \lVert \hat{f}^{(i)} \rVert_1$ is then built. Figure 3 shows how the output feature matrix acquires its multiple sparsity properties after $l_2$ normalization. The L-BFGS minimization method is used to optimize the objective function until the model converges, giving the optimized solution of the network. The output matrix of the optimized first-layer network structure is then used as input for a second round of training to obtain the sparse feature matrix; to guarantee the sparsity of the resulting matrix, relatively many zero-valued entries are retained. Finally, the obtained sparse feature matrix is input into a linear SVM for classification, yielding the final gender determination result.
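A minimal sketch of the gender-classification step follows, using scikit-learn's LinearSVC as a stand-in for the "linear SVM" named in the text; the feature matrices are synthetic placeholders, not patent data.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_male = rng.normal(loc=0.4, size=(300, 64))     # sparse features, male images
X_female = rng.normal(loc=-0.4, size=(300, 64))  # sparse features, female images
X = np.vstack([X_male, X_female])
y = np.array([1] * 300 + [0] * 300)              # 1 = male, 0 = female

clf = LinearSVC(C=1.0).fit(X, y)                 # the linear SVM of Step 2-6
print("training accuracy:", clf.score(X, y))
```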
Owing to physiological differences, male vein images exhibit higher quality than female ones in subsequent feature extraction, so the male vein images are labeled as the HQ group and the female vein images as the LQ group. Figure 4 shows the effect on the gray-matrix distributions of the two genders after feature learning by the two-layer unsupervised sparse feature learning model.
The present invention uses an efficient two-layer unsupervised sparse feature learning model to group male and female vein images of different quality, obtaining vein quality classification results consistent with subjective vision (HQ and LQ groups). The improved maximum inter-class variance binary feature encoding model then performs feature extraction: the feature encoding values extracted from high-quality vein images are denoted $\mathcal{A}$, and those from low-quality vein images are denoted $\mathcal{B}$. After valid feature encoding values are obtained, $l_1$ normalization is applied to the vectors to further improve their discriminability, and the normalized feature encoding values are concatenated to obtain the final histogram feature descriptor of the vein image.
Step 3: Local neighborhood binary coding feature extraction is performed directly on the male vein images of the HQ group using the maximum inter-class variance encoding, with the result

$$\mathcal{A}_i = w_i\, A_i, \qquad i = 1, \dots, m \times n$$

where $m \times n$ is the size of the input vein image, $A_i$ is the encoding value, $w_i$ is the image encoding weight, $\mathcal{A}_i$ is the encoding value of the corresponding pixel, and $i$ is the pixel index. Figure 5 shows the feature histogram distributions of two different HQ-group input vein images obtained with this coding rule. As the feature histograms in Figure 5 show, the two images are highly distinguishable, which fully demonstrates the effectiveness of the features extracted by the model.
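A minimal sketch of turning the per-pixel codes and weights into the feature distribution histogram follows. Whether the patent accumulates each code's bin by its weight or forms the product $w_i A_i$ per pixel is not fully recoverable from the garbled equations, so weighted accumulation is assumed here.

```python
import numpy as np

def weighted_histogram(codes, weights, n_bins=256):
    """Accumulate each pixel's code bin by its weight (8-neighbor codes assumed)."""
    hist = np.zeros(n_bins, dtype=np.float64)
    np.add.at(hist, np.asarray(codes, dtype=np.int64).ravel(),
              np.asarray(weights, dtype=np.float64).ravel())
    return hist

rng = np.random.default_rng(0)
codes = rng.integers(0, 256, size=(178, 178))    # per-pixel DLBP codes
weights = rng.random((178, 178))                 # per-pixel omega* weights
print(weighted_histogram(codes, weights).shape)  # (256,)
```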
Step 4: Feature extraction for the LQ group is as follows. For each LQ-group input image, adaptive histogram equalization is used to enhance its contrast, and feature extraction is performed on the enhanced image; at the same time, feature extraction is also performed on the vein image without contrast enhancement. The weights from the feature representation of the contrast-enhanced image are combined with the feature encoding of the non-enhanced image to obtain the final feature representation

$$\mathcal{B}_i = w_i^{CE}\, B_i, \qquad i = 1, \dots, m \times n$$

where $m \times n$ is the size of the input vein image, $B_i$ is the encoding value, $w_i^{CE}$ is the image encoding weight after AHE processing, $\mathcal{B}_i$ is the encoding value of the corresponding pixel, and $i$ is the pixel index. The LQ-group vein image histogram distributions obtained with this encoding strategy are shown in Figure 6; the difference between the two histogram distributions is markedly enhanced, which fully demonstrates the effectiveness of the proposed feature encoding strategy for representing low-contrast images.
Steps 5-6: After $l_1$ normalization of the obtained pixel feature encoding values of the HQ and LQ groups, they are concatenated to obtain the feature histogram descriptor. In the vein image matching stage, the chi-square distance method is proposed to compute the feature vector similarity and obtain the final matching result.
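A minimal sketch of this descriptor assembly follows: $l_1$-normalize each group's histogram and concatenate.

```python
import numpy as np

def l1_normalize(v, eps=1e-12):
    v = np.asarray(v, dtype=np.float64)
    return v / (np.abs(v).sum() + eps)

def final_descriptor(hist_hq, hist_lq):
    """l1-normalize the HQ and LQ histograms separately, then concatenate."""
    return np.concatenate([l1_normalize(hist_hq), l1_normalize(hist_lq)])

d = final_descriptor(np.random.rand(256), np.random.rand(256))
print(d.shape, round(d.sum(), 3))  # (512,) 2.0
```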
To verify the effectiveness of the improved maximum inter-class variance binary feature encoding model (DLBP) proposed by the present invention, equal error rate comparisons between DLBP and the LIF models (SIFT and its improved variants SURF, ASIFT and RootSIFT) were performed on the vein image database, as shown in Figures 7(a), 7(b) and 7(c). The results show that on the male vein image library, the contrast-preprocessed female vein image library and the global vein image library, the proposed model achieves lower equal error rates, demonstrating the effectiveness of its feature extraction and generation strategy.

Claims (7)

  1. An identity recognition method based on the heterogeneous separation of gender information from multimodal vein images, characterized in that the method comprises the following steps:
    Step 1: using a dorsal hand vein acquisition device to collect K dorsal hand vein images, extracting the effective area of each image, and performing zero-mean normalization to obtain K vein images of pixel size n×m, wherein male and female dorsal hand vein images each number K/2, 100<K<10000, 100<n<10000, 100<m<10000; performing secondary normalization on the vein images to obtain a vein image database, using the vein image database as the input matrix $\tilde{X}^{(j)}$ of a two-layer unsupervised sparse feature learning model, and going to Step 2;
    Step 2: building a two-layer unsupervised sparse feature learning model, feeding the input matrix $\tilde{X}^{(j)}$ into the model for training to obtain a sparse feature representation distributed in the gender-determination space of the vein information, called the sparse feature matrix of the dorsal hand vein images, and inputting it into a linear SVM for classification to obtain dorsal hand vein images carrying gender label information, namely male dorsal hand vein images and female dorsal hand vein images; going to Step 3 and Step 4 simultaneously;
    Step 3: performing feature extraction directly on the male dorsal hand vein images, computing the maximum inter-class variance binary feature encoding value and encoding weight of the current neighborhood, obtaining the feature distribution histogram of the male vein images, and going to Step 5;
    Step 4: performing feature extraction directly on the female dorsal hand vein images and computing the maximum inter-class variance binary feature encoding value of the current neighborhood; then applying contrast enhancement to the female dorsal hand vein images, computing the maximum inter-class variance binary encoding weight, combining the female feature encoding values and encoding weights to obtain the female current-neighborhood feature encoding values and hence the feature distribution histogram of the female vein images, and going to Step 6;
    Step 5: using the chi-square distance method to match the male vein image feature distribution histogram against the vein image database, computing the feature vector similarity, and obtaining the identity authentication result;
    Step 6: using the chi-square distance method to match the female vein image feature distribution histogram against the vein image database, computing the feature vector similarity, and obtaining the identity authentication result.
  2. The identity recognition method based on the heterogeneous separation of gender information from multimodal vein images according to claim 1, characterized in that in Step 1, a dorsal hand vein acquisition device collects K dorsal hand vein images, the effective area of each image is extracted, and zero-mean normalization is applied to obtain K vein images of pixel size n×m; secondary normalization is then performed via

    $$\tilde{X}^{(j)} = \frac{X_p}{\sqrt{\tfrac{1}{T}\sum_{t=1}^{T} X_{p,t}^{2} + \varepsilon}}$$

    to obtain the vein images, where $X_p$ denotes the vein image after zero-mean normalization, $\tilde{X}^{(j)}$ denotes the (normalized) input matrix of the two-layer unsupervised sparse feature learning model, $\varepsilon$ is a constant term with $\varepsilon = 10^{8}$, $j$ denotes gender, and $T$ denotes the normalized dimension.
  3. The identity recognition method based on the heterogeneous separation of gender information from multimodal vein images according to claim 1, characterized in that in Step 2, a two-layer unsupervised sparse feature learning model is built, the input matrix $\tilde{X}^{(j)}$ is fed into the model for training to obtain a sparse feature representation distributed in the gender-determination space of the vein information, called the sparse feature matrix of the dorsal hand vein images, and this matrix is input into a linear SVM for classification to obtain dorsal hand vein images carrying gender label information, specifically as follows:
    Step 2-1: feeding the input matrix $\tilde{X}^{(j)}$ into the two-layer unsupervised sparse feature learning model, and going to Step 2-2;
    Step 2-2: the model applies $l_2$ normalization to the row distribution of the input matrix, $\tilde{f}_j = f_j / \lVert f_j \rVert_2$, and then to its column distribution, $\hat{f}^{(i)} = \tilde{f}^{(i)} / \lVert \tilde{f}^{(i)} \rVert_2$, where $f_j$ denotes the row distribution of the input matrix and $f^{(i)}$ its column distribution; going to Step 2-3;
    Step 2-3: building the objective function of the model from the $l_1$ penalty term, $\min \sum_{i=1}^{K} \lVert \hat{f}^{(i)} \rVert_1$; going to Step 2-4;
    Step 2-4: iteratively minimizing the objective function with the L-BFGS method to obtain the output matrix of the optimized first-layer network structure; going to Step 2-5;
    Step 2-5: taking the output matrix of the optimized first layer as input and returning to Step 2-1 until the output matrix of the optimized second layer is obtained, which is the sparse feature matrix of the dorsal hand vein images; going to Step 2-6;
    Step 2-6: inputting the sparse feature matrix of the dorsal hand vein images into the linear SVM for classification to obtain dorsal hand vein images carrying gender label information, namely male dorsal hand vein images and female dorsal hand vein images.
  4. The identity recognition method based on the heterogeneous separation of gender information from multimodal vein images according to claim 1, characterized in that in Step 3, feature extraction is performed directly on the male dorsal hand vein images, the maximum inter-class variance binary feature encoding value and encoding weight of the current neighborhood are computed, and the feature distribution histogram of the male vein images is obtained; the maximum inter-class variance binary feature encoding value is

    $$A = \sum_{i=0}^{P-1} S(r_i - \tau^{*})\, 2^{i}$$

    with the means $\mu_0$ and $\mu_1$ given by

    $$\mu_0 = \frac{1}{P_0} \sum_{r_i < \tau} r_i, \qquad \mu_1 = \frac{1}{P_1} \sum_{r_i \ge \tau} r_i$$

    where $r_i$ denotes a pixel to be encoded, $\tau$ the gray value of the pixel to be encoded, $P = P_0 + P_1$ the number of neighborhood pixels, $P_0$ the number of pixels encoded as 0, $P_1$ the number of pixels encoded as 1, and $i$ the pixel index; and the logical judgment function $S(x)$ is

    $$S(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}$$
  5. The identity recognition method based on the heterogeneous separation of gender information from multimodal vein images according to claim 1, characterized in that in Step 3 and Step 4, feature extraction is performed directly on the male/female dorsal hand vein images, and the maximum inter-class variance binary feature encoding value of the current neighborhood is computed as follows: applying the maximum inter-class variance binary feature encoding directly to the dorsal hand vein image yields the feature encoding values

    $$\mathcal{A}_i = w_i\, A_i, \qquad i = 1, \dots, m \times n$$

    where $m \times n$ is the size of the input dorsal hand vein image, $A_i$ is the feature encoding value, $w_i$ is the image encoding weight, $\mathcal{A}_i$ is the feature encoding value of the corresponding pixel, and $i$ is the image pixel index.
  6. The identity recognition method based on the heterogeneous separation of gender information from multimodal vein images according to claim 1, characterized in that in Step 3, feature extraction is performed directly on the male dorsal hand vein images, and the feature encoding weight of the maximum inter-class variance of the current neighborhood is computed as follows: the encoding weight in the maximum inter-class variance binary feature encoding model is determined at the pixel value for which the inter-class variance of the classified pixel sets is maximal,

    $$\tau^{*} = \arg\max_{\tau}\, \sigma_B^{2}(\tau)$$

    where $\tau^{*}$ is the optimal encoding threshold, $\sigma_B^{2}$ is the inter-class variance after neighborhood binarization, $r_i$ denotes a pixel to be encoded, and $P = P_0 + P_1$ is the number of neighborhood pixels, with $P_0$ pixels encoded as 0 and $P_1$ pixels encoded as 1; the inter-class variance $\sigma_B^{2}$ is

    $$\sigma_B^{2} = \frac{P_0 P_1}{P^{2}} \left(\mu_0 - \mu_1\right)^{2}$$

    and the weight realizing effective feature selection is solved as

    $$\omega^{*} = \frac{\sigma_B^{2}(\tau^{*}) + C}{\sigma^{2} + C}$$

    where $\omega^{*}$ is the encoding weight of the maximum inter-class variance binary feature encoding, $\sigma^{2}$ denotes the overall variance of the neighborhood, $\sigma_B^{2}$ denotes the inter-class variance after neighborhood binarization, $C$ is a constant added to guarantee the numerical stability of the optimized weight solution, set to $C = 0.01^{2}$, and $\tau$ denotes the gray value of the pixel to be encoded.
  7. The identity recognition method based on the heterogeneous separation of gender information from multimodal vein images according to claim 1, characterized in that in Step 4, contrast enhancement is applied to the female dorsal hand vein images, and the maximum inter-class variance binary feature encoding weight is computed as follows; the maximum inter-class variance binary encoding is

    $$\mathcal{B}_i = w_i^{CE}\, B_i, \qquad i = 1, \dots, m \times n$$

    where $m \times n$ is the size of the input vein image, $B_i$ is the feature encoding value, $w_i^{CE}$ is the image encoding weight after AHE processing, $\mathcal{B}_i$ is the encoding value of the corresponding pixel, and $i$ is the image pixel index.
PCT/CN2021/111271 2020-09-14 2021-08-06 Identity recognition method based on heterogeneous separation of gender information from multimodal vein images WO2022052701A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010962618.1 2020-09-14
CN202010962618.1A CN112214746B (zh) 2020-09-14 Identity recognition method based on heterogeneous separation of gender information from multimodal vein images

Publications (1)

Publication Number Publication Date
WO2022052701A1 true WO2022052701A1 (zh) 2022-03-17

Family

ID=74049458

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/111271 WO2022052701A1 (zh) 2020-09-14 2021-08-06 Identity recognition method based on heterogeneous separation of gender information from multimodal vein images

Country Status (2)

Country Link
CN (1) CN112214746B (zh)
WO (1) WO2022052701A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112214746B (zh) 2020-09-14 2021-07-13 中国矿业大学 Identity recognition method based on heterogeneous separation of gender information from multimodal vein images
CN113689344B (zh) 2021-06-30 2022-05-27 中国矿业大学 Low-exposure image enhancement method based on feature-decoupling learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069400A (zh) 2015-07-16 2015-11-18 北京工业大学 Face image gender recognition system based on stacked sparse autoencoding
CN109190566A (zh) 2018-09-10 2019-01-11 中国民航大学 Finger vein recognition method fusing local encoding and a CNN model
CN109409179A (zh) 2018-03-30 2019-03-01 中国科学院半导体研究所 Vein recognition method based on neighborhood class encoding learning
CN110543822A (zh) 2019-07-29 2019-12-06 浙江理工大学 Finger vein recognition method based on a convolutional neural network and supervised discrete hashing
CN112214746A (zh) 2020-09-14 2021-01-12 中国矿业大学 Identity recognition method based on heterogeneous separation of gender information from multimodal vein images

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107437074B (zh) 2017-07-27 2020-02-28 深圳市斑点猫信息技术有限公司 Identity authentication method and device
CN108425588A (zh) 2018-04-13 2018-08-21 爱驰汽车有限公司 Vehicle door control device, method, system and medium based on palm vein recognition
CN109034016B (zh) 2018-07-12 2021-10-15 辽宁工业大学 Universal dorsal hand vein image recognition method based on an S-CNN model

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069400A (zh) 2015-07-16 2015-11-18 北京工业大学 Face image gender recognition system based on stacked sparse autoencoding
CN109409179A (zh) 2018-03-30 2019-03-01 中国科学院半导体研究所 Vein recognition method based on neighborhood class encoding learning
CN109190566A (zh) 2018-09-10 2019-01-11 中国民航大学 Finger vein recognition method fusing local encoding and a CNN model
CN110543822A (zh) 2019-07-29 2019-12-06 浙江理工大学 Finger vein recognition method based on a convolutional neural network and supervised discrete hashing
CN112214746A (zh) 2020-09-14 2021-01-12 中国矿业大学 Identity recognition method based on heterogeneous separation of gender information from multimodal vein images

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PAN ZAIYU; WANG JUN; SHEN ZHENGWEN; CHEN XIAOLING; LI MING: "Multi-Layer Convolutional Features Concatenation With Semantic Feature Selector for Vein Recognition", IEEE ACCESS, IEEE, USA, vol. 7, 2019, pages 90608 - 90619, XP011735983, DOI: 10.1109/ACCESS.2019.2927230 *
WANG, JUN ET AL.: "Bimodal Vein Data Mining via Cross-Selected-Domain Knowledge Transfer.", IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, vol. 13, no. 3, 31 March 2018 (2018-03-31), XP011675149, DOI: 10.1109/TIFS.2017.2766039 *
WANG, JUN ET AL.: "Quality-Specific Hand Vein Recognition System.", IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, vol. 12, no. 11, 30 November 2017 (2017-11-30), XP011660595, DOI: 10.1109/TIFS.2017.2713340 *

Also Published As

Publication number Publication date
CN112214746B (zh) 2021-07-13
CN112214746A (zh) 2021-01-12

Similar Documents

Publication Publication Date Title
US11809485B2 (en) Method for retrieving footprint images
Choy et al. Fuzzy model-based clustering and its application in image segmentation
WO2022052701A1 (zh) Identity recognition method based on heterogeneous separation of gender information from multimodal vein images
CN111340824B (zh) Image feature segmentation method based on data mining
Zhao et al. Adaptive logit adjustment loss for long-tailed visual recognition
CN111462116A (zh) Multimodal parametric model optimization and fusion method based on radiomics features
CN111126240B (zh) Three-channel feature fusion face recognition method
CN107463917A (zh) Face feature extraction method based on improved LTP fused with two-dimensional bidirectional PCA
CN110837768A (zh) Online detection and recognition method for rare animal protection
CN108280421B (zh) Human action recognition method based on multi-feature depth motion maps
CN111984817B (zh) Fine-grained image retrieval method based on self-attention mechanism weighting
CN112507804A (zh) Identity recognition method based on hand images
CN111127407B (zh) Style-transfer forged image detection device and method based on the Fourier transform
Artola et al. Glad: A global-to-local anomaly detector
Pathak et al. Entropy based CNN for segmentation of noisy color eye images using color, texture and brightness contour features
Ariana et al. Color image segmentation using kohonen self-organizing map (som)
CN115909401A (zh) Cattle face recognition method and device fusing deep learning, electronic device and medium
RU2809571C1 (ru) Identity recognition method based on heterogeneous separation of gender information of multimodal vein images
CN112926592A (zh) Trademark retrieval method and device based on an improved FAST algorithm
CN111931757A (zh) Rapid finger vein sorting method and device based on MDLBP block histograms and PCA dimensionality reduction
Lang et al. A learning-based approach for aluminum tube defect detection using imbalanced dataset
CN111931670A (zh) Depth-image head detection and localization method and system based on convolutional neural networks
Luo et al. Image fusion of the feature level based on quantum-behaved particle swarm optimization algorithm
Zhang et al. Glass defect recognition method based on improved convolutional neural networks
CN112241954B (zh) Full-field adaptive segmentation network configuration method based on differentiated classification of masses

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21865757

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21865757

Country of ref document: EP

Kind code of ref document: A1