CN103020953A - Segmenting method of fingerprint image

Segmenting method of fingerprint image

Info

Publication number
CN103020953A
CN103020953A · CN2012104407357A · CN201210440735A
Authority
CN
China
Prior art keywords
image
piece
gray
block
background
Prior art date
Legal status
Pending
Application number
CN2012104407357A
Other languages
Chinese (zh)
Inventor
刘汉英
周剑勋
Current Assignee
Guilin University of Technology
Original Assignee
Guilin University of Technology
Priority date
Filing date
Publication date
Application filed by Guilin University of Technology filed Critical Guilin University of Technology
Priority to CN2012104407357A priority Critical patent/CN103020953A/en
Publication of CN103020953A publication Critical patent/CN103020953A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a segmentation method for fingerprint images. The method comprises the following steps: a. reading in the fingerprint image; b. fast cropping of the fingerprint image; c. inverse color transformation; d. equalization; e. top-hat transformation; f. block-wise computation of feature quantities; g. block-wise segmentation; h. morphological image processing; and i. obtaining the segmented fingerprint image. The method is also applicable to fingerprint images of poor quality, segments the fingerprint correctly, and has high reliability; the ISODATA (Iterative Self-Organizing Data Analysis Techniques Algorithm) clustering algorithm is used for the block-wise segmentation, so clustering is fast; fast cropping reduces the processing time; block-wise segmentation and pixel-wise segmentation are combined, so the contour of the segmented fingerprint is relatively smooth; and image equalization and top-hat transformation are used to enhance the image, which makes the segmentation more effective.

Description

A Segmentation Method for Fingerprint Images

Technical Field

The invention relates to a fingerprint image preprocessing method, and in particular to a segmentation method for fingerprint images.

Background Art

Fingerprint recognition is the earliest applied and cheapest branch of human biometric identification technology. It is widely used in criminal investigation, home security, identity verification at banks, securities firms, insurance companies and other financial institutions, access control for important areas, and staff or member management, and has broad application prospects.

Fingerprint image segmentation is an important step in the preprocessing stage of fingerprint recognition. Its main purpose is to remove non-fingerprint regions and fingerprint regions that are too noisy to distinguish. Effective segmentation reduces the time needed for subsequent processing, reduces the extraction of false feature points, and improves recognition accuracy.

Commonly used fingerprint segmentation methods include: a. segmentation based on gray-level characteristics, which segments the fingerprint image using its gray-level mean and variance, either with a global threshold or an adaptive threshold; global thresholding relies on a well-separated bimodal gray-level distribution, so the result is poor when the bimodality is weak or the distribution is multimodal, while adaptive thresholding tends to discard low-contrast but strongly oriented regions that could still be recovered, and the segmented image shows block artifacts; b. segmentation based on the orientation and frequency characteristics of the fingerprint image, which is relatively complex, in particular the computation of point orientation or point frequency, and is hard to compute accurately in regions with uneven ridge width or near the core and delta; c. segmentation based on gray-level and orientation characteristics, which uses the local standard deviation (or variance) and coherence of the image together with a linear classifier; this method fuses orientation and gray-level characteristics, but the choice of its coefficients is critical and setting the threshold is difficult; d. segmentation based on hidden Markov models; e. segmentation based on transforms and energy fields; methods d and e take many factors into account, but their computational complexity is high, they are inefficient on low-quality fingerprint images, and they cannot segment them accurately.

Summary of the Invention

The object of the present invention is to provide a segmentation method for fingerprint images that applies the ISODATA clustering algorithm (Iterative Self-Organizing Data Analysis Techniques Algorithm) and morphological image processing to fingerprint image segmentation. The fingerprint image is first cropped; if its contrast is low, the image is equalized; a top-hat transform is then applied to compensate for the uneven background brightness; finally, ISODATA clustering is used for block-wise segmentation, followed by morphological image processing.

The specific steps are:

(1) Read in the fingerprint image and determine the block size W from the image resolution: if resolution > 600 dpi, W = 30; if resolution < 400 dpi, W = 8; otherwise W defaults to 16. Convert the fingerprint image to a double-precision image img.
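For illustration, a minimal Python/NumPy sketch of step (1) follows (the embodiment below uses Matlab; the function name, the use of Pillow for file reading, and the resolution argument are assumptions of this sketch, since the patent reads the resolution from the file header with imfinfo):

```python
import numpy as np
from PIL import Image

def read_fingerprint(path, resolution_dpi=500):
    """Read a fingerprint image, choose the block size W from the
    resolution, and convert the image to double (float in [0, 1])."""
    if resolution_dpi > 600:
        W = 30
    elif resolution_dpi < 400:
        W = 8
    else:
        W = 16                      # default block size
    img = np.asarray(Image.open(path).convert('L'), dtype=np.float64) / 255.0
    return img, W
```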

(2) Fast cropping of the double-precision image img obtained in step (1):

a. Starting from row 1, every x rows, compute the difference diff(i) between the maximum and minimum gray value of row i, where i = 1 + kx, k is a non-negative integer, x is the row sampling interval determined from the image size (in the embodiment below, x = √m ≈ 21.91 for m = 480), m is the number of rows of the image, i ≤ m, and i is rounded down with the floor function;

b. Find the maximum diffmax of all the differences diff(i);

c. Find the sampled rows diffd with a large difference between maximum and minimum gray value, i.e. the rows whose difference exceeds diffmax/3;

d. Find the starting row mb with a large difference: starting from the first sampled row found in step (2)c, if two consecutive sampled rows both have a large difference, the starting row is taken as the sampled row immediately before this one (not less than 1); otherwise continue searching toward later rows;

e. Find the ending row me with a large difference: starting from the last sampled row found in step (2)c, if two consecutive sampled rows both have a large difference, the ending row is taken as the sampled row immediately after this one (not greater than m); otherwise continue searching toward earlier rows;

f. Find the starting column nb and ending column ne with large differences by the same method as steps a-e;

g. Adjust the ending row me and ending column ne so that the size of the cropped image is an integer multiple of the block size W, and crop the original image img to obtain the cropped image img1 of size m1*n1, where m1 = me - mb and n1 = ne - nb.
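A Python sketch of this fast-cropping step follows. It assumes the sampling interval is the square root of the number of rows (or columns), consistent with the embodiment, and it simplifies the consecutive-line test of sub-steps d-e to taking one sampled line before the first and after the last line with a large range:

```python
import numpy as np

def fast_crop(img, W):
    """Crop away rows/columns whose gray-level range is small, sampling
    every x-th row/column to keep the scan cheap."""
    m, n = img.shape

    def bounds(axis_len, line_range):
        x = max(1, int(np.floor(np.sqrt(axis_len))))  # sampling interval (assumption)
        idx = np.arange(0, axis_len, x)               # sampled lines
        diff = line_range(idx)                        # max - min gray value per line
        strong = idx[diff > diff.max() / 3.0]         # lines with a large range
        lo = max(strong[0] - x, 0)                    # one sample before the first
        hi = min(strong[-1] + x, axis_len - 1)        # one sample after the last
        return lo, hi

    mb, me = bounds(m, lambda i: img[i, :].max(axis=1) - img[i, :].min(axis=1))
    nb, ne = bounds(n, lambda j: img[:, j].max(axis=0) - img[:, j].min(axis=0))

    # make the cropped size an integer multiple of the block size W
    me = mb + ((me - mb) // W) * W
    ne = nb + ((ne - nb) // W) * W
    return img[mb:me, nb:ne]
```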

(3) Inverse color transformation:

a. Compute the gray-level mean avgbound of the pixels on the border of image img1, i.e. the mean gray value of all pixels in the first row, last row, first column and last column;

b. If the background is bright, i.e. avgbound > 0.5, apply the inverse color transformation img1 = 1 - img1; otherwise skip to step (4).
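A minimal sketch of the border-mean test and conditional inversion (Python; the function name is illustrative):

```python
import numpy as np

def invert_if_bright_background(img1):
    """Invert the image when the border pixels suggest a bright background,
    so that the ridges are bright for the later top-hat transform."""
    border = np.concatenate([img1[0, :], img1[-1, :], img1[:, 0], img1[:, -1]])
    if border.mean() > 0.5:          # background brighter than mid-gray
        img1 = 1.0 - img1
    return img1
```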

(4) Equalization:

a. Compute the gray-level mean of image img1:

$$avg1 = \frac{1}{m_1 n_1}\sum_{i=1}^{m_1}\sum_{j=1}^{n_1} img1(i,j) \qquad (1)$$

b. Compute the standard deviation of the image:

$$std1 = \sqrt{\frac{\sum_{i=1}^{m_1}\sum_{j=1}^{n_1}\left(img1(i,j)-avg1\right)^2}{m_1 n_1 - 1}} \qquad (2)$$

c. Image histogram equalization: if std1 < T1 (T1 is set to 0.11), perform histogram equalization; otherwise skip to step (5).
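A sketch of the conditional equalization, using scikit-image's histogram equalization as a stand-in for the Matlab routine used in the embodiment:

```python
from skimage import exposure

def equalize_if_low_contrast(img1, T1=0.11):
    """Histogram-equalize the image only when its standard deviation
    indicates low contrast (std1 < T1), as in formula (2)."""
    std1 = img1.std(ddof=1)          # sample standard deviation (divides by m1*n1 - 1)
    if std1 < T1:
        img1 = exposure.equalize_hist(img1)
    return img1
```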

(5) Top-hat transform:

a. Set the structuring element to a disk of radius W/2;

b. Apply the top-hat transform to image img1 to obtain image img2.
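This step maps onto a standard white top-hat with a disk structuring element; a one-function sketch using scikit-image (a stand-in for Matlab's strel/imtophat used in the embodiment):

```python
from skimage import morphology

def top_hat(img1, W):
    """White top-hat with a disk of radius W/2 to compensate the
    uneven background brightness."""
    return morphology.white_tophat(img1, morphology.disk(W // 2))
```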

(6) Block-wise computation of feature quantities:

a. Divide image img2 into non-overlapping blocks imgb of size W*W and compute the feature quantities for each block;

b. Compute the block gray-level mean avgb:

$$avgb = \frac{1}{w^2}\sum_{i=1}^{w}\sum_{j=1}^{w} imgb(i,j) \qquad (3)$$

c. Compute the block standard deviation stdb:

$$stdb = \sqrt{\frac{\sum_{i=1}^{w}\sum_{j=1}^{w}\left(imgb(i,j)-avgb\right)^2}{w^2 - 1}} \qquad (4)$$

d. Compute the block gray-level contrast zb according to formula (5), where n1 is the number of points in the block whose gray value is greater than or equal to the block mean avgb, n2 is the number of points whose gray value is less than avgb, t1 is the sum of the gray values of all points greater than or equal to avgb, and t2 is the sum of the gray values of all points less than avgb;

e. Compute the block orientation coherence cohb:

$$cohb = \frac{\sqrt{(G_{xx}-G_{yy})^2 + 4G_{xy}^2}}{G_{xx}+G_{yy}} \qquad (6)$$

where

$$G_{xx} = \sum_{u=1}^{w}\sum_{v=1}^{w}\nabla_x(i+u,j+v)^2,\qquad G_{yy} = \sum_{u=1}^{w}\sum_{v=1}^{w}\nabla_y(i+u,j+v)^2,$$

$$G_{xy} = \sum_{u=1}^{w}\sum_{v=1}^{w}\nabla_x(i+u,j+v)\cdot\nabla_y(i+u,j+v) \qquad (7)$$

$$\nabla_x = \sum_{u=1}^{3}\sum_{v=1}^{3}S_x(u,v)\cdot imgb(i+u,j+v),\qquad \nabla_y = \sum_{u=1}^{3}\sum_{v=1}^{3}S_y(u,v)\cdot imgb(i+u,j+v)$$

Here Sx and Sy are the Sobel operators.
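A sketch of the block-wise feature computation. The gray-contrast formula (5) is not reproduced in the source, so this sketch assumes it is the mean gray value of the above-mean pixels minus that of the below-mean pixels; the other three features follow formulas (3), (4), (6) and (7):

```python
import numpy as np
from scipy import ndimage

SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # Sobel x kernel
SY = SX.T                                                          # Sobel y kernel

def block_features(img2, W):
    """Per-block gray mean, standard deviation, gray contrast and
    orientation coherence, returned as an (mB, nB, 4) array."""
    gx = ndimage.convolve(img2, SX)          # x gradient of the whole image
    gy = ndimage.convolve(img2, SY)          # y gradient of the whole image
    mB, nB = img2.shape[0] // W, img2.shape[1] // W
    feats = np.zeros((mB, nB, 4))
    for bi in range(mB):
        for bj in range(nB):
            blk = img2[bi*W:(bi+1)*W, bj*W:(bj+1)*W]
            avgb = blk.mean()                                # formula (3)
            stdb = blk.std(ddof=1)                           # formula (4)
            hi, lo = blk[blk >= avgb], blk[blk < avgb]
            zb = hi.mean() - (lo.mean() if lo.size else 0.0) # assumed contrast measure
            bgx = gx[bi*W:(bi+1)*W, bj*W:(bj+1)*W]
            bgy = gy[bi*W:(bi+1)*W, bj*W:(bj+1)*W]
            gxx, gyy = (bgx**2).sum(), (bgy**2).sum()
            gxy = (bgx * bgy).sum()
            # formula (6); a small epsilon avoids division by zero in flat blocks
            cohb = np.sqrt((gxx - gyy)**2 + 4*gxy**2) / (gxx + gyy + 1e-12)
            feats[bi, bj] = (avgb, stdb, zb, cohb)
    return feats
```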

(7) Block-wise segmentation:

a. Determine the upper and lower limits up_feature and low_feature for each of the four feature quantities:

up_feature= min{0.9,maxfeature-difffeature} up_feature = min{0.9, maxfeature-difffeature}

low_feature= min{0.1,minfeature+difffeature}                     (8) low_feature = min{0.1, minfeature+difffeature} (8)

where difffeature = (maxfeature - minfeature)/10, maxfeature is the maximum of the feature quantity over all blocks, and minfeature is the minimum over all blocks;

b. Determine the initial background block class and its class center. A block is an initial background block when any one of its feature quantities is less than the corresponding lower limit. The number of initial background blocks is at least 1. The initial background class center is the four-dimensional vector formed by the mean of the feature quantities of all initial background blocks;

c. Determine the initial foreground block class and its class center. A block is an initial foreground block when all of its feature quantities are greater than the corresponding upper limits. The initial foreground class center is the four-dimensional vector formed by the mean of the feature quantities of all initial foreground blocks. If no initial foreground block is found, i.e. the number of initial foreground blocks is 0, the initial foreground class center is the four-dimensional vector formed by the four upper limits;

d. For each undetermined block that is neither an initial foreground block nor an initial background block, compute the distances of its four feature quantities to the corresponding four components of the foreground class center and of the background class center, i.e. the Euclidean distances;

e. If the distances of all four feature quantities to the foreground class center are greater than twice the distances to the background class center, the block is a background block; if the distances of all four feature quantities to the background class center are greater than twice the distances to the foreground class center, the block is a foreground block; otherwise the block remains temporarily undetermined;

f. Recompute the centers of the foreground and background block classes. The background class center is the four-dimensional vector formed by the mean of the feature quantities of all background blocks, and the foreground class center is the four-dimensional vector formed by the mean of the feature quantities of all foreground blocks. If no foreground block is found, the foreground class center is the four-dimensional vector formed by the four upper limits;

g. Compare the new cluster centers with the old ones; if the distance is less than the threshold T2 (set to 0.01), stop; otherwise take the new cluster centers as the reference and return to step d.
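A sketch of the ISODATA-style block segmentation of step (7). The per-feature distance test and the re-computation of the class centers follow sub-steps b-g; returning the background-block mask (the mskback of the embodiment) is a choice of this sketch:

```python
import numpy as np

def block_segment(feats, T2=0.01, max_iter=50):
    """Two-class ISODATA-style clustering of the block feature vectors.
    Returns a boolean mask of background blocks (True = background)."""
    f = feats.reshape(-1, 4)
    fmax, fmin = f.max(axis=0), f.min(axis=0)
    d = (fmax - fmin) / 10.0
    up = np.minimum(0.9, fmax - d)               # formula (8)
    low = np.minimum(0.1, fmin + d)

    back0 = (f < low).any(axis=1)                # any feature below its lower limit
    fore0 = (f > up).all(axis=1)                 # all features above their upper limits
    c_back = f[back0].mean(axis=0)               # patent states at least one such block exists
    c_fore = f[fore0].mean(axis=0) if fore0.any() else up.copy()

    label = np.zeros(len(f), dtype=int)          # 0 undetermined, 1 foreground, -1 background
    label[back0], label[fore0] = -1, 1
    for _ in range(max_iter):
        und = label == 0
        d_fore = np.abs(f[und] - c_fore)         # per-feature distance to each centre
        d_back = np.abs(f[und] - c_back)
        is_back = (d_fore > 2 * d_back).all(axis=1)
        is_fore = (d_back > 2 * d_fore).all(axis=1)
        idx = np.flatnonzero(und)
        label[idx[is_back]], label[idx[is_fore]] = -1, 1
        new_back = f[label == -1].mean(axis=0)
        new_fore = f[label == 1].mean(axis=0) if (label == 1).any() else up.copy()
        done = (np.linalg.norm(new_back - c_back) < T2 and
                np.linalg.norm(new_fore - c_fore) < T2)
        c_back, c_fore = new_back, new_fore
        if done:
            break
    # undetermined blocks are left to the pixel-level refinement of step (8)
    return (label == -1).reshape(feats.shape[:2])
```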

(8) Morphological image processing:

a. For image img1, compute the gray-level mean avgback over the background blocks obtained by the block-wise segmentation. If the background is dark, i.e. avgback < 0.5, apply the inverse color transformation y = 1 - img1, and replace the gray values at the background with the gray-level mean of the background;

b. Initialize the number of processing passes n to 1;

c. Erode the image y with a 5*5 structuring element, enhance it (square it), find the region mask msk with the OTSU method (maximum between-class variance), invert msk, and remove the smaller objects;

d. Erode the region mask msk with a 7*7 structuring element, fill the holes, and increase the number of passes n by 1; if n ≤ 4, return to step c, otherwise execute step e;

e. To prevent over-segmentation at the boundary, enlarge the outer boundary of the segmentation result by W/2 to obtain the final segmentation result msk.
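A sketch of the pixel-level morphological refinement of step (8). The minimum object size for "removing smaller objects" is not given in the patent, so the value here is an assumption, and re-eroding y on every pass is one plausible reading of sub-steps c-d:

```python
import numpy as np
from scipy import ndimage
from skimage import filters, morphology

def refine_mask(img1, back_blocks, W):
    """Pixel-level refinement: erode, enhance (square), Otsu-threshold,
    drop small objects, fill holes, and dilate the boundary by W/2."""
    y = img1.copy()
    back_pix = np.kron(back_blocks.astype(np.uint8),
                       np.ones((W, W), dtype=np.uint8)).astype(bool)
    back_pix = back_pix[:y.shape[0], :y.shape[1]]
    avgback = y[back_pix].mean()
    if avgback < 0.5:                    # dark background: invert first
        y = 1.0 - y
        avgback = 1.0 - avgback
    y[back_pix] = avgback                # flatten the background, as in the embodiment

    msk = np.zeros_like(y, dtype=bool)
    for _ in range(4):                   # four passes, as in sub-steps b-d
        y = morphology.erosion(y, morphology.square(5))
        y = y ** 2                       # enhancement by squaring
        msk = y > filters.threshold_otsu(y)
        msk = ~msk                       # invert so that the fingerprint is True
        msk = morphology.remove_small_objects(msk, min_size=64)  # size is an assumption
        msk = morphology.erosion(msk, morphology.square(7))
        msk = ndimage.binary_fill_holes(msk)
    # enlarge the outer boundary by W/2 to avoid over-segmentation
    return morphology.dilation(msk, morphology.disk(W // 2))
```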

(9) Obtain the segmented fingerprint image.

The purpose of step (2) is to remove non-fingerprint regions and reduce the time needed for subsequent processing; computing on rows sampled at intervals speeds up the processing.

Step (3) is applied only to images with a bright background; it turns the fingerprint image into an image with a dark background and a bright foreground so that the top-hat transform of step (5) can be applied.

Step (4) is applied only to images with low contrast; it stretches the gray-level range, makes the gray-level distribution more uniform, increases the contrast, and makes the image details clearer.

The purpose of step (5) is to compensate for the uneven background brightness.

Step (6) computes four feature quantities per block, namely the block gray-level mean, block standard deviation, block gray-level contrast and block orientation coherence, which serve as the basis for the block-wise segmentation in step (7).

Step (7) uses the ISODATA clustering algorithm: the initial foreground and background blocks are set according to the feature values from step (6), and clustering starts from the means of the initial foreground and background blocks as the initial cluster centers. The clustering yields three kinds of blocks: foreground blocks, background blocks, and temporarily undetermined blocks; during clustering only the undetermined blocks are computed and judged, so clustering is fast.

Step (8) uses morphological processing to segment at the pixel level: the image is eroded and enhanced, the region mask is binarized with the maximum between-class variance (OTSU) method, and small objects and holes are removed. To prevent over-segmentation at the boundary, the outer boundary of the segmentation result is enlarged by W/2 to obtain the final segmentation result msk; this margin does not affect subsequent processing.

Compared with existing methods, the fingerprint image segmentation method of the present invention has the following advantages:

(1) The method is also applicable to fingerprint images of poor quality; it segments them correctly and has high reliability;

(2) The method uses the ISODATA clustering algorithm for block-wise segmentation, so clustering is fast; fast cropping reduces the processing time;

(3) The method combines block-wise and pixel-wise segmentation, so the contour of the segmented fingerprint is relatively smooth;

(4) The method uses image equalization and the top-hat transform to enhance the image, which makes the segmentation more effective.

Brief Description of the Drawings

Fig. 1 is a flowchart of the fingerprint segmentation method of the present invention.

Fig. 2 shows a fingerprint image of an embodiment of the present invention.

In the figure: (a) is the original image FVC2004db3 103_5.GIF; (b) is the image after fast cropping of (a); (c) is the image after the top-hat transform of (b); (d) is the block-wise segmentation result of (c), with black marking the background blocks; (e) is the image obtained from (b) by inverse color transformation and by replacing the background, according to (d), with the background gray-level mean; (f) is the image mask obtained by morphological processing of (e); (g) is the result of enlarging (f) to the original image size and ANDing it with (a).

Fig. 3 shows the block feature quantities of the present invention, which are decimals between 0 and 1, where the darkest corresponds to 0 and the whitest to 1.

In the figure: (a) block gray-level mean avgb; (b) block standard deviation stdb; (c) block gray-level contrast zb; (d) block orientation coherence cohb.

Detailed Description of the Embodiments

Embodiment:

The method of the present invention runs under Windows XP and is implemented in the Matlab language. Its steps are:

(1) Use the imread function to read in the fingerprint image FVC2004db3 103_5.GIF (the sensor type is thermal sweeping), use the imfinfo function to obtain the image resolution, and determine the block size W = 16. Convert the image to a double-precision image img of size 480*300, with m = 480 and n = 300, as shown in Fig. 2(a).

(2) Fast cropping of image img: x = 21.9089; compute the difference diff(i) between the maximum and minimum gray value of rows 1, 22, 44, 66, 88, 110, 132, 154, 176, 198, 220, ..., 461; the maximum is diffmax = 0.3137, giving mb = 44, me = 454, nb = 18, ne = 300. Adjust the size to an integer multiple of W, giving me = 443 and ne = 289, and obtain the cropped image img1 of size 400*272, as shown in Fig. 2(b).

(3) Compute the mean of the border pixels of image img1: avgbound = 0.4009 < 0.5, so skip to step (4).

(4) Compute the standard deviation of image img1: std1 = 0.3201 > 0.11, so skip to step (5).

(5) Use the function strel to create a disk-shaped morphological structuring element of radius W/2 = 8, and use the function imtophat to apply the top-hat transform to image img1, obtaining image img2, as shown in Fig. 2(c).

(6) Compute, block by block on image img2, the block gray-level mean avgb, block standard deviation stdb, block gray-level contrast zb and block orientation coherence cohb, as shown in Fig. 3.

(7) Determine the upper and lower limits of the block gray-level mean (0.5018, 0.0558), of the block standard deviation (0.4248, 0.0472), of the block gray-level contrast (0.8380, 0.0931) and of the block orientation coherence (0.8684, 0.0965). The initial background class center is (0.0343, 0.0260, 0.0439, 0.3199) and the initial foreground class center is (0.4483, 0.4464, 0.8646, 0.9166). Use the ISODATA clustering algorithm to find part of the background blocks, i.e. the block-wise segmentation result mskback; the black blocks in Fig. 2(d) are the background blocks found.

(8) For image img1, compute from mskback the gray-level mean avgback = 0.3090 over the background found by the block-wise segmentation, apply the inverse color transformation to the image, and replace the gray values at the background with 1 - avgback, obtaining the image y, as shown in Fig. 2(e). Use the function imerode to erode the image y with a 5*5 structuring element, square it, find the region mask msk with the OTSU method, invert it, and remove the small objects; erode the region mask msk with a 7*7 structuring element, fill the holes with the function imfill, and then enlarge by W/2 = 8 to obtain the final mask msk, as shown in Fig. 2(f).

(9) Enlarge the mask to the original image size 480*300 and output the segmentation result, as shown in Fig. 2(g); black is the segmented background.
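For orientation, the embodiment's flow can be reproduced by chaining the illustrative helpers sketched after steps (1)-(8) above (the file name and resolution are placeholders, and resizing the mask back to the original image frame is omitted; this is not the authors' Matlab code):

```python
import numpy as np

# assumes read_fingerprint, fast_crop, invert_if_bright_background,
# equalize_if_low_contrast, top_hat, block_features, block_segment and
# refine_mask from the sketches above are defined in the same session
img, W = read_fingerprint('103_5.gif', resolution_dpi=500)   # step (1)
img1 = fast_crop(img, W)                                      # step (2)
img1 = invert_if_bright_background(img1)                      # step (3)
img1 = equalize_if_low_contrast(img1)                         # step (4)
img2 = top_hat(img1, W)                                       # step (5)
feats = block_features(img2, W)                               # step (6)
back_blocks = block_segment(feats)                            # step (7)
msk = refine_mask(img1, back_blocks, W)                       # step (8)
segmented = img1 * msk                                        # step (9): mask out the background
```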

Claims (1)

1. A segmentation method for a fingerprint image, characterized in that the specific steps are:
(1) read in the fingerprint image and determine the block size W from the image resolution: if resolution > 600 dpi, W = 30; if resolution < 400 dpi, W = 8; otherwise W defaults to 16; convert the fingerprint image to a double-precision image img;
(2) perform fast cropping on the double-precision image img obtained in step (1):
a. starting from row 1, every x rows, compute the difference diff(i) between the maximum and minimum gray value of row i, where i = 1 + kx, k is a non-negative integer, x is the row sampling interval determined from the image size, m is the number of rows of the image, i ≤ m, and i is rounded down with the floor function;
b. find the maximum diffmax of all the differences diff(i);
c. find the sampled rows diffd with a large difference between maximum and minimum gray value, i.e. the rows whose difference exceeds diffmax/3;
d. find the starting row mb with a large difference: starting from the first sampled row found in step (2)c, if two consecutive sampled rows both have a large difference, the starting row is the sampled row before this one, not less than 1; otherwise continue searching toward later rows;
e. find the ending row me with a large difference: starting from the last sampled row found in step (2)c, if two consecutive sampled rows both have a large difference, the ending row is the sampled row after this one, not greater than m; otherwise continue searching toward earlier rows;
f. find the starting column nb and ending column ne with large differences by the method of steps a-e;
g. adjust the ending row me and ending column ne so that the size of the cropped image is an integer multiple of the block size W, and crop the original image img to obtain the cropped image img1 of size m1*n1, where m1 = me - mb and n1 = ne - nb;
(3) inverse color transformation:
a. compute the gray-level mean avgbound of the pixels on the border of image img1, i.e. the mean gray value of all pixels in the first row, the last row, the first column and the last column;
b. if the background is bright, i.e. avgbound > 0.5, apply the inverse color transformation img1 = 1 - img1; otherwise jump to step (4);
(4) equalization:
a. compute the gray-level mean avg1 of image img1 as in formula (1);
b. compute the standard deviation std1 of the image as in formula (2);
c. image histogram equalization: if std1 < T1, where T1 is 0.11, perform the image histogram equalization; otherwise jump to step (5);
(5) top-hat transform:
a. set the structuring element to a disk of radius W/2;
b. apply the top-hat transform to image img1 to obtain image img2;
(6) block-wise computation of feature quantities:
a. divide image img2 into non-overlapping blocks imgb of size W*W and compute the feature quantities;
b. compute the block gray-level mean avgb as in formula (3);
c. compute the block standard deviation stdb as in formula (4);
d. compute the block gray-level contrast zb as in formula (5), where n1 is the number of points in the block whose gray value is greater than or equal to the block mean avgb, n2 is the number of points whose gray value is less than avgb, t1 is the sum of the gray values of all points in the block greater than or equal to avgb, and t2 is the sum of the gray values of all points in the block less than avgb;
e. compute the block orientation coherence cohb as in formulas (6) and (7), where Sx and Sy are the Sobel operators;
(7) block-wise segmentation:
a. determine the upper and lower limits up_feature and low_feature for each of the four feature quantities:
up_feature = min{0.9, maxfeature - difffeature}
low_feature = min{0.1, minfeature + difffeature} (8)
where difffeature = (maxfeature - minfeature)/10, maxfeature is the maximum of the feature quantity over all blocks, and minfeature is the minimum over all blocks;
b. determine the initial background block class and its class center: a block is an initial background block when any of its feature quantities is less than the corresponding lower limit; the number of initial background blocks is at least 1; the initial background class center is the four-dimensional vector formed by the mean of the feature quantities of all initial background blocks;
c. determine the initial foreground block class and its class center: a block is an initial foreground block when all of its feature quantities are greater than the corresponding upper limits; the initial foreground class center is the four-dimensional vector formed by the mean of the feature quantities of all initial foreground blocks; if no initial foreground block is found, i.e. the number of initial foreground blocks is 0, the initial foreground class center is the four-dimensional vector formed by the four upper limits;
d. for each undetermined block that is neither an initial foreground block nor an initial background block, compute the distances of its four feature quantities to the corresponding four components of the foreground class center and of the background class center, i.e. the Euclidean distances;
e. if the distances of all four feature quantities to the foreground class center are greater than twice the distances to the background class center, the block is a background block; if the distances of all four feature quantities to the background class center are greater than twice the distances to the foreground class center, the block is a foreground block; otherwise the block remains temporarily undetermined;
f. recompute the centers of the foreground and background block classes: the background class center is the four-dimensional vector formed by the mean of the feature quantities of all background blocks, and the foreground class center is the four-dimensional vector formed by the mean of the feature quantities of all foreground blocks; if no foreground block is found, the foreground class center is the four-dimensional vector formed by the four upper limits;
g. compare the new cluster centers with the old ones; if the distance is less than the threshold T2, which is set to 0.01, stop; otherwise take the new cluster centers as the reference and return to step d;
(8) morphological image processing:
a. for image img1, compute the gray-level mean avgback over the background blocks obtained by the block-wise segmentation; if the background is dark, i.e. avgback < 0.5, apply the inverse color transformation y = 1 - img1, and replace the gray values at the background with the gray-level mean of the background;
b. initialize the number of processing passes n to 1;
c. erode the image y with a 5*5 structuring element, enhance it (square it), find the region mask msk with the OTSU method, i.e. the maximum between-class variance method, invert msk, and remove the smaller objects;
d. erode the region mask msk with a 7*7 structuring element, fill the holes, and increase the number of passes n by 1; if n ≤ 4, return to step c, otherwise execute step e;
e. to prevent over-segmentation at the boundary, enlarge the outer boundary of the segmentation result by W/2 to obtain the final segmentation result msk;
(9) obtain the segmented fingerprint image.
CN2012104407357A 2012-11-07 2012-11-07 Segmenting method of fingerprint image Pending CN103020953A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012104407357A CN103020953A (en) 2012-11-07 2012-11-07 Segmenting method of fingerprint image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012104407357A CN103020953A (en) 2012-11-07 2012-11-07 Segmenting method of fingerprint image

Publications (1)

Publication Number Publication Date
CN103020953A true CN103020953A (en) 2013-04-03

Family

ID=47969519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012104407357A Pending CN103020953A (en) 2012-11-07 2012-11-07 Segmenting method of fingerprint image

Country Status (1)

Country Link
CN (1) CN103020953A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104952071A (en) * 2015-06-11 2015-09-30 昆明理工大学 Maximum between-cluster variance image segmentation algorithm based on GLSC (gray-level spatial correlation)
CN106803053A (en) * 2015-11-26 2017-06-06 奇景光电股份有限公司 fingerprint image processing method and device
CN107710671A (en) * 2015-04-30 2018-02-16 德山真旭 Terminal installation and computer program
CN107800499A (en) * 2017-11-09 2018-03-13 周小凤 A kind of radio programs broadcast control method
CN111815654A (en) * 2020-07-14 2020-10-23 北京字节跳动网络技术有限公司 Method, apparatus, device and computer readable medium for processing image
CN112215851A (en) * 2020-09-28 2021-01-12 武汉理工大学 Road network automatic construction method, storage medium and system
CN112699863A (en) * 2021-03-25 2021-04-23 深圳阜时科技有限公司 Fingerprint enhancement algorithm, computer-readable storage medium and electronic device
CN114549670A (en) * 2022-02-23 2022-05-27 京东方数字科技有限公司 Image processing method and image processing system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007107050A1 (en) * 2006-03-23 2007-09-27 Zksoftware Beijing Inc. Fingerprint identification method and system
CN101526994A (en) * 2009-04-03 2009-09-09 山东大学 Fingerprint image segmentation method irrelevant to collecting device
CN102208021A (en) * 2011-07-21 2011-10-05 中国人民解放军国防科学技术大学 Fingerprint image splitting method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007107050A1 (en) * 2006-03-23 2007-09-27 Zksoftware Beijing Inc. Fingerprint identification method and system
CN101526994A (en) * 2009-04-03 2009-09-09 山东大学 Fingerprint image segmentation method irrelevant to collecting device
CN102208021A (en) * 2011-07-21 2011-10-05 中国人民解放军国防科学技术大学 Fingerprint image splitting method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
梁媛媛 (LIANG Yuanyuan) et al.: "一种指纹图像的快速分割方法" (A fast segmentation method for fingerprint images), 《计算机工程》 (Computer Engineering) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107710671A (en) * 2015-04-30 2018-02-16 德山真旭 Terminal installation and computer program
CN107710671B (en) * 2015-04-30 2020-06-12 德山真旭 Terminal device and computer-readable storage medium
CN104952071A (en) * 2015-06-11 2015-09-30 昆明理工大学 Maximum between-cluster variance image segmentation algorithm based on GLSC (gray-level spatial correlation)
CN106803053A (en) * 2015-11-26 2017-06-06 奇景光电股份有限公司 fingerprint image processing method and device
CN106803053B (en) * 2015-11-26 2019-10-11 奇景光电股份有限公司 Fingerprint image processing method and device
CN107800499A (en) * 2017-11-09 2018-03-13 周小凤 A kind of radio programs broadcast control method
CN111815654A (en) * 2020-07-14 2020-10-23 北京字节跳动网络技术有限公司 Method, apparatus, device and computer readable medium for processing image
CN112215851A (en) * 2020-09-28 2021-01-12 武汉理工大学 Road network automatic construction method, storage medium and system
CN112215851B (en) * 2020-09-28 2022-06-21 武汉理工大学 Road network automatic construction method, storage medium and system
CN112699863A (en) * 2021-03-25 2021-04-23 深圳阜时科技有限公司 Fingerprint enhancement algorithm, computer-readable storage medium and electronic device
CN114549670A (en) * 2022-02-23 2022-05-27 京东方数字科技有限公司 Image processing method and image processing system
CN114549670B (en) * 2022-02-23 2023-04-07 京东方数字科技有限公司 Image processing method and image processing system

Similar Documents

Publication Publication Date Title
CN103020953A (en) Segmenting method of fingerprint image
CN110415208B (en) An adaptive target detection method and its device, equipment and storage medium
CN109684922B (en) A multi-model recognition method for finished dishes based on convolutional neural network
CN104537673B (en) Infrared Image Segmentation based on multi thresholds and adaptive fuzzy clustering
CN107038416B (en) A Pedestrian Detection Method Based on Improved HOG Feature of Binary Image
CN102663400B (en) LBP (length between perpendiculars) characteristic extraction method combined with preprocessing
CN105335716A (en) Improved UDN joint-feature extraction-based pedestrian detection method
CN105809121A (en) Multi-characteristic synergic traffic sign detection and identification method
CN106169081A (en) A kind of image classification based on different illumination and processing method
CN104732215A (en) Remote-sensing image coastline extracting method based on information vector machine
CN105335966A (en) Multi-scale remote-sensing image segmentation method based on local homogeneity index
CN104134219A (en) Color image segmentation algorithm based on histograms
CN104616308A (en) Multiscale level set image segmenting method based on kernel fuzzy clustering
CN106529532A (en) License plate identification system based on integral feature channels and gray projection
CN102254326A (en) Image segmentation method by using nucleus transmission
CN106991686A (en) A kind of level set contour tracing method based on super-pixel optical flow field
CN105719275A (en) Parallel combination image defect segmentation method
CN108664969B (en) A Conditional Random Field Based Road Sign Recognition Method
Ye et al. Hyperspectral image classification using principal components-based smooth ordering and multiple 1-D interpolation
CN115147746B (en) Saline-alkali geological identification method based on unmanned aerial vehicle remote sensing image
CN109784216B (en) Vehicle-mounted thermal imaging pedestrian detection Rois extraction method based on probability map
CN101447027B (en) Binaryzation method of magnetic code character area and application thereof
US7620246B2 (en) Method and apparatus for image processing
CN111242891B (en) Rail surface defect identification and classification method
CN105205485B (en) Large scale image partitioning algorithm based on maximum variance algorithm between multiclass class

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130403