WO2018050123A1 - Method and apparatus for detecting an iris image - Google Patents

Method and apparatus for detecting an iris image

Info

Publication number
WO2018050123A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image blocks
feature
blocks
average
Prior art date
Application number
PCT/CN2017/102265
Other languages
English (en)
French (fr)
Inventor
初育娜
王琪
张祥德
Original Assignee
北京眼神科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京眼神科技有限公司 filed Critical 北京眼神科技有限公司
Publication of WO2018050123A1 publication Critical patent/WO2018050123A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification

Definitions

  • the present application relates to the field of image detection, and in particular to a method and apparatus for detecting an iris image.
  • Iris recognition is a highly secure biometric recognition technology with very broad application prospects. Iris image acquisition is the most important basic step in iris recognition, and the quality of the acquired iris image directly affects the performance of the iris recognition system. Among low-quality iris images, blur is a particularly serious problem: a blurred iris image directly leads to false acceptance or false rejection errors in the iris recognition process.
  • Blur detection on a single-frame iris image is a no-reference image blur assessment problem, which is comparatively difficult.
  • Most existing methods are based either on the global image or only on local iris regions, and it is difficult for them to obtain accurate results. For example, when an iris image is detected by global image analysis, the result is easily affected by noise such as glasses, eyelashes and light spots.
  • Existing local iris region analysis methods also have defects: because iris texture varies from person to person and some people naturally have little iris texture, a method that uses only features extracted from the iris region image can easily misjudge the clear iris images of this group of people.
  • the present application provides a method and apparatus for detecting an iris image, to at least solve the technical problem that blur detection methods for iris images in the prior art have low detection accuracy.
  • a method for detecting an iris image includes: acquiring an iris image to be detected; determining an iris region image and a pupil edge region image from the iris image; performing spatial domain feature extraction on the iris region image to obtain a first feature set, and performing frequency domain feature extraction on the pupil edge region image to obtain a second feature set; and performing feature screening on the first feature set and the second feature set, and detecting the selected feature set to obtain a detection result, wherein the detection result is used to characterize whether the iris image is clear.
  • an apparatus for detecting an iris image comprising: an acquisition module for acquiring an iris image to be detected; a determining module for determining an iris region image and a pupil edge region image from the iris image; an extraction module for performing spatial domain feature extraction on the iris region image to obtain a first feature set, and performing frequency domain feature extraction on the pupil edge region image to obtain a second feature set; and a detection module for performing feature screening on the first feature set and the second feature set, and detecting the selected feature set to obtain a detection result, wherein the detection result is used to represent whether the iris image is clear.
  • an electronic device includes a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory complete communication with each other through a communication bus;
  • a memory for storing a computer program
  • the method of detecting an iris image according to any one of the embodiments of the present application is implemented when the processor executes the computer program stored in the memory.
  • a computer program which, when executed, performs the method of detecting an iris image according to any one of the embodiments of the present application.
  • a storage medium for storing a computer program, the computer program, when executed, performing the method of detecting an iris image according to any one of the embodiments of the present application.
  • in the embodiments of the present application, an iris image to be detected may be acquired; an iris region image and a pupil edge region image are determined from the iris image; spatial domain features are extracted from the iris region image to obtain a first feature set, and frequency domain features are extracted from the pupil edge region image to obtain a second feature set; and the first feature set and the second feature set are detected to obtain a detection result, thereby implementing blur detection of the iris image.
  • because features are extracted from two regions and in two domains, the feature set representation is more comprehensive and the detection accuracy is improved. Further, after the first feature set and the second feature set are extracted, feature screening of the two feature sets not only shortens the feature set but also avoids redundancy of feature information and improves accuracy, thereby solving the technical problem that blur detection methods for iris images in the prior art have low detection accuracy.
  • this multi-region, multi-indicator approach improves the performance and robustness of the system, so that the system can quickly and conveniently collect high-quality iris images.
  • FIG. 1 is a flow chart of a method for detecting an iris image according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of pupil positioning in an implementation manner according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a to-be-determined iris region image in an implementation manner according to an embodiment of the present application
  • FIG. 4a is a schematic diagram of an image of a left iris region in an implementation manner, in accordance with an embodiment of the present application
  • FIG. 4b is a schematic diagram of an image of a right iris region in an implementation in accordance with an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a to-be-determined pupil edge region image in an implementation manner according to an embodiment of the present application
  • FIG. 6a is a schematic diagram of an image of a left pupil edge region in an implementation manner according to an embodiment of the present application.
  • FIG. 6b is a schematic diagram of an image of a right pupil edge region in an implementation manner according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an image of an iris region in an implementation manner in accordance with an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an MSCN coefficient of an iris region image in an implementation manner, in accordance with an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a generalized Gaussian fit of the MSCN coefficients of an iris region image in an implementation manner, in accordance with an embodiment of the present application.
  • FIG. 10 is a schematic diagram of collecting and segmenting an image of a pupil edge region in an implementation manner according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of DCT feature extraction of an image of a pupil edge region in an implementation manner in accordance with an embodiment of the present application
  • FIG. 12 is a schematic diagram of an apparatus for detecting an iris image according to an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • a method embodiment of a method for detecting an iris image is provided.
  • the steps shown in the flowcharts of the drawings may be executed, for example, as a set of computer executable instructions in a computer system; and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one described herein.
  • FIG. 1 is a flowchart of a method for detecting an iris image according to an embodiment of the present application. As shown in FIG. 1 , the method includes the following steps:
  • Step S102 acquiring an iris image to be detected.
  • the iris image described above may include the pupil, iris, sclera, eyelids and eyelashes; that is, it is an image of the human eye region.
  • a grayscale iris image to be detected may be acquired.
  • the iris image to be detected may be a grayscale image, which may be referred to as a grayscale iris image in the embodiment of the present application.
  • the iris images described in the embodiments of the present application may all be grayscale iris images.
  • Step S104 determining the iris region image and the pupil edge region image from the iris image.
  • the iris region image may be an image of an iris region in an iris image
  • the pupil edge region image may be an image of the pupil edge region in the iris image, that is, an image of the inner edge region of the iris, which may include part of the iris.
  • sharp edges are the image areas most susceptible to blur. In an iris image, the most obvious sharp edge is the edge of the pupil, and that area is not easily affected by noise. Therefore, in an ideal environment, the pupil edge is the region most useful for judging whether the iris image is blurred.
  • that is, the image information contained in the pupil edge region of the iris image is the information most useful for judging whether the iris image is blurred.
  • in an optional implementation, the vicinity of the pupil edge may be selected from the iris image as a Region Of Interest (ROI); and, so that iris images with an inconspicuous pupil edge can still be judged, the iris region may be selected as another region of interest. In this way the iris region image and the pupil edge region image are obtained.
  • Step S106 performing spatial domain feature extraction on the iris region image to obtain a first feature set, and performing frequency domain feature extraction on the pupil edge region image to obtain a second feature set.
  • multiple feature extraction methods may be used to extract multiple types of features from the two ROIs. For example, a spatial domain feature of the iris region and a frequency domain feature of the pupil edge region may be extracted to obtain feature sets for evaluating the degree of blur of the iris image, that is, the first feature set and the second feature set described above.
  • the two ROIs described above may include the pupil edge region and the iris region.
  • Step S108 performing feature screening on the first feature set and the second feature set, and detecting the selected feature set to obtain a detection result, wherein the detection result is used to represent whether the iris image is clear.
  • that is, feature screening may be performed on the extracted first feature set and second feature set to obtain a final feature set, and detection is performed according to the final feature set to determine whether the collected iris image is clear, thereby obtaining the detection result.
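  • As a sketch of this step: the application does not name a concrete feature selector or classifier, so the example below uses univariate F-score selection and a linear SVM from scikit-learn purely as stand-ins; k, the number of retained features, is an illustrative assumption.

```python
# Hypothetical sketch of step S108: screen the concatenated feature set,
# then classify the image as clear (1) or blurred (0).
# The selector and classifier are stand-ins; the patent does not specify them.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

def train_detector(features, labels, k=10):
    """features: (n_samples, n_features) array; labels: 1 = clear, 0 = blurred."""
    selector = SelectKBest(f_classif, k=k).fit(features, labels)  # feature screening
    clf = SVC(kernel="linear").fit(selector.transform(features), labels)
    return selector, clf

def detect(selector, clf, feature_vector):
    """Return 1 if the iris image is judged clear, 0 if blurred."""
    return int(clf.predict(selector.transform(feature_vector[None, :]))[0])
```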
  • through the above steps, an iris image to be detected can be acquired; an iris region image and a pupil edge region image are determined from the iris image; spatial domain features are extracted from the iris region image to obtain a first feature set, and frequency domain features are extracted from the pupil edge region image to obtain a second feature set; and the two feature sets are detected to obtain a detection result, thereby implementing blur detection of the iris image.
  • because features are extracted from two regions and in two domains, the feature set representation is more comprehensive and the detection accuracy is improved. Further, after the first feature set and the second feature set are extracted, feature screening of the two feature sets not only shortens the feature set but also avoids redundancy of feature information and improves accuracy, thereby solving the technical problem that blur detection methods for iris images in the prior art have low detection accuracy.
  • determining an iris region image and a pupil edge region image from the iris image may include:
  • Step S1042 positioning the pupil in the iris image to obtain the radius of the pupil and the coordinates of its center.
  • in an optional implementation, the pupil in the iris image may be coarsely located using a radial symmetry transform to obtain the pupil radius and center coordinates, as shown in FIG. 2.
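  • For illustration, the sketch below performs coarse pupil localization with OpenCV's Hough circle transform as a stand-in for the radial symmetry transform mentioned above; all parameter values are illustrative assumptions.

```python
# Coarse pupil localization on a grayscale iris image.
# Stand-in technique: Hough circle transform instead of the radial
# symmetry transform named in the text; parameters are assumptions.
import cv2
import numpy as np

def locate_pupil(gray):
    """Return (cx, cy, r) of the most salient circle, assumed to be the pupil."""
    blurred = cv2.medianBlur(gray, 5)                  # suppress eyelash noise
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=2,
                               minDist=gray.shape[0],  # expect a single pupil
                               param1=100, param2=30,
                               minRadius=20, maxRadius=80)
    if circles is None:
        return None
    cx, cy, r = np.round(circles[0, 0]).astype(int)
    return cx, cy, r
```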
  • Step S1044 obtaining a first to-be-determined region image and a second to-be-determined region image according to the radius, the center coordinates and a first preset range, and obtaining a third to-be-determined region image and a fourth to-be-determined region image according to the radius, the center coordinates and a second preset range, wherein the first to-be-determined region image and the second to-be-determined region image are located in the iris region, and the third to-be-determined region image and the fourth to-be-determined region image are located in the pupil edge region.
  • the first to-be-determined region image and the second to-be-determined region image may be images of the iris region in the grayscale iris image; the third to-be-determined region image and the fourth to-be-determined region image may be images of the pupil edge region in the grayscale iris image.
  • the first preset range may be a preset iris region range, and the second preset range may be a preset pupil edge region range. Both ranges may be set according to actual needs, or by selecting, through multiple experiments, the ranges with the best detection effect.
  • for example, the first preset range may be two laterally symmetrical 60*55 sub-regions on both sides below the pupil in the horizontal direction, and the second preset range may be two symmetrical 20*40 sub-regions on both sides of the pupil in the horizontal direction; alternatively, the first preset range may be two asymmetrical 60*55 sub-regions on both sides below the pupil, and the second preset range may be two asymmetrical 20*40 sub-regions on both sides of the pupil in the horizontal direction.
  • in an optional implementation, two symmetrical 60*55 sub-regions on both sides below the pupil in the horizontal direction may be selected as to-be-determined regions, as shown by the two boxes in FIG. 3, obtaining the first to-be-determined region image and the second to-be-determined region image, as shown in FIG. 4a and FIG. 4b; and two symmetrical 20*40 sub-regions on both sides of the pupil may be selected as to-be-determined regions, as shown by the two boxes in FIG. 5, obtaining the third to-be-determined region image and the fourth to-be-determined region image, as shown in FIG. 6a and FIG. 6b.
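  • The sketch below crops the four to-be-determined region images from the pupil geometry. The exact offsets of the sub-regions relative to the pupil are not given in the text, so the placements used here (windows just outside the pupil on both sides, shifted slightly downward for the iris regions) are illustrative assumptions only.

```python
# Hypothetical extraction of the four to-be-determined regions from the
# pupil radius r and centre (cx, cy); offsets are assumed, not from the text.
import numpy as np

def crop(img, cx, cy, w, h):
    """Crop a w*h window centred at (cx, cy); no bounds checking for brevity."""
    x0, y0 = int(cx - w // 2), int(cy - h // 2)
    return img[y0:y0 + h, x0:x0 + w]

def candidate_regions(gray, cx, cy, r):
    d = r + 40                                         # assumed lateral offset
    iris_left  = crop(gray, cx - d, cy + 20, 60, 55)   # first region (60*55)
    iris_right = crop(gray, cx + d, cy + 20, 60, 55)   # second region (60*55)
    edge_left  = crop(gray, cx - r, cy, 20, 40)        # third region (20*40)
    edge_right = crop(gray, cx + r, cy, 20, 40)        # fourth region (20*40)
    return iris_left, iris_right, edge_left, edge_right
```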
  • Step S1046 obtaining, from the first to-be-determined region image and the second to-be-determined region image, a region image that satisfies a first preset condition to obtain the iris region image, and obtaining, from the third to-be-determined region image and the fourth to-be-determined region image, a region image that satisfies a second preset condition to obtain the pupil edge region image.
  • that is, after the first, second, third and fourth to-be-determined region images are obtained, the four images are screened: a region image satisfying the screening condition is selected from the first and second to-be-determined region images as the iris region image, and a region image satisfying the screening condition is selected from the third and fourth to-be-determined region images as the pupil edge region image.
  • in step S1046, obtaining a region image that satisfies the first preset condition from the first to-be-determined region image and the second to-be-determined region image to obtain the iris region image may include:
  • Step S112 determining whether the first to-be-determined region image and the second to-be-determined region image contain noise.
  • in an optional implementation, region images with less spot and eyelash noise can be selected as the iris region image by thresholding. Whether a to-be-determined region image contains noise can be determined with the following indicator:
    h_1 = ∏_{i=1..M} ∏_{j=1..N} 1(T_min < I_un(i, j) < T_max),
  • where 1(·) is the indicator function; I_un(i, j) is a pixel of the input to-be-determined region image, that is, of the first or the second to-be-determined region image; M and N are the height and width of the to-be-determined region image; T_min is the gray threshold for pupil-boundary and eyelash noise; and T_max is the gray threshold for spot noise. A pixel value less than T_min indicates that the pixel contains pupil-boundary or eyelash noise, and a pixel value greater than T_max indicates that the pixel contains spot noise.
  • if h_1 = 0 is obtained, that is, not all pixel values of the to-be-determined region image lie between T_min and T_max, the to-be-determined region image is determined to contain noise.
  • Step S114 if the first to-be-determined region image and the second to-be-determined region image both contain noise, or neither of them contains noise, the first to-be-determined region image and the second to-be-determined region image are taken as the iris region images.
  • Step S116 if the first to-be-determined region image contains noise and the second to-be-determined region image does not contain noise, the first to-be-determined region image is replaced with the second to-be-determined region image.
  • the process of replacing the first to-be-determined region image with the second to-be-determined region image may be: replacing the gray value of each pixel in the first to-be-determined region image with the gray value of the pixel at the same pixel position in the second to-be-determined region image.
  • Step S118 if the first to-be-determined region image does not contain noise and the second to-be-determined region image contains noise, the second to-be-determined region image is replaced with the first to-be-determined region image.
  • correspondingly, the process of replacing the second to-be-determined region image with the first to-be-determined region image may be: replacing the gray value of each pixel in the second to-be-determined region image with the gray value of the pixel at the same pixel position in the first to-be-determined region image.
  • in this implementation, the iris region image contains two region images, that is, the left iris region image ROI1 and the right iris region image ROI2.
  • in this way, the first to-be-determined region image and the second to-be-determined region image may each be compared with a first preset noise gray threshold range to determine whether they contain noise, and the to-be-determined region images meeting the first preset condition are selected as the iris region images, reducing the influence of noise in the iris image and thereby improving the detection accuracy of the iris image.
  • the first preset noise gray threshold range may be T_min to T_max described above.
  • in step S1046, obtaining a region image that satisfies the second preset condition from the third to-be-determined region image and the fourth to-be-determined region image to obtain the pupil edge region image may include:
  • Step S122 determining whether the third to-be-determined region image contains spot noise.
  • that is, the pupil edge region image can be determined by judging whether the third to-be-determined region image contains spot noise.
  • in an optional implementation, the region with less spot noise can be selected as the pupil edge region by thresholding, using the following indicator to determine whether a to-be-determined region contains spot noise:
    h_2 = ∏_{i=1..M′} ∏_{j=1..N′} 1(I_un(i, j) < T_max),
  • where I_un(i, j) is a pixel of the input to-be-determined region image, that is, of the third or the fourth to-be-determined region image; M′ and N′ are the height and width of the to-be-determined region image; and T_max is the spot noise gray threshold.
  • Step S124 if the third to-be-determined region image contains spot noise, the fourth to-be-determined region image is taken as the pupil edge region image.
  • that is, if h_2 = 0 is obtained for the third to-be-determined region image, it is determined to contain spot noise, and the fourth to-be-determined region image is used as the pupil edge region image.
  • Step S126 if the third to-be-determined region image does not contain spot noise, the third to-be-determined region image is taken as the pupil edge region image.
  • that is, if h_2 = 1 is obtained by the formula for the third to-be-determined region image, it is determined not to contain spot noise, and the third to-be-determined region image can be used as the pupil edge region image.
  • in this implementation, the pupil edge region image contains only one region image, that is, the pupil edge region image ROI3.
  • in this way, the third to-be-determined region image and the fourth to-be-determined region image are compared with a second preset noise gray threshold range to determine whether they contain spot noise, and the to-be-determined region image meeting the second preset condition is selected as the pupil edge region image, reducing the influence of noise in the iris image and thereby improving the detection accuracy of the iris image.
  • the second preset noise gray threshold range may be 0 to T_max described above.
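  • A minimal sketch of the threshold screening of steps S112 to S126, using the indicator form reconstructed above; the threshold values T_min = 40 and T_max = 220 are illustrative assumptions.

```python
# Threshold-based screening of the to-be-determined regions.
import numpy as np

def h_indicator(region, t_min, t_max):
    """h = 1 when every pixel lies inside (t_min, t_max): the region is noise-free."""
    return int(np.all((region > t_min) & (region < t_max)))

def select_iris_regions(first, second, t_min=40, t_max=220):  # thresholds assumed
    """Steps S114-S118: keep both, or replace the noisy one with the clean one."""
    h_first = h_indicator(first, t_min, t_max)
    h_second = h_indicator(second, t_min, t_max)
    if h_first == h_second:            # both noisy or both clean: keep both
        return first, second
    return (second, second) if h_first == 0 else (first, first)

def select_pupil_edge(third, fourth, t_max=220):
    """Steps S122-S126: only spot noise matters, so the lower bound is 0."""
    return third if h_indicator(third, 0, t_max) else fourth
```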
  • in step S106, performing spatial domain feature extraction on the iris region image to obtain the first feature set may include:
  • Step S132 calculating the mean-subtracted contrast normalized (MSCN) coefficients of the iris region image, and fitting the MSCN coefficients with a generalized Gaussian distribution to obtain a feature vector.
  • a Generalized Gaussian Distribution (GGD) has a wide distribution range and can capture the large differences in the tail of the empirical probability curve of the MSCN coefficients. A zero-mean generalized Gaussian distribution is defined as
    f(x; α, σ²) = (α / (2βΓ(1/α))) · exp(−(|x|/β)^α), with β = σ · sqrt(Γ(1/α)/Γ(3/α)),
  • where α is the shape parameter and σ² is the scale parameter; Γ(·) is the gamma function, defined as
    Γ(a) = ∫₀^∞ t^(a−1) e^(−t) dt, a > 0.
  • the probability density curve of the MSCN coefficients can be fitted with this zero-mean generalized Gaussian parameter model, as shown in FIG. 9; the parameters may be estimated by matching the model to the empirical distribution of the MSCN coefficients, and the fitted pair (α, σ²) is taken as the feature vector.
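  • A sketch of the MSCN computation and GGD fit, assuming the standard BRISQUE-style formulation (Gaussian-weighted local mean/variance normalization and moment-matching estimation of α); the window width and stabilizing constant are conventional choices rather than values from this application.

```python
# MSCN coefficients and a moment-matching GGD fit (BRISQUE-style sketch).
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma

def mscn(image, sigma=7/6, c=1.0):
    """Mean-subtracted, contrast-normalized coefficients of a grayscale image."""
    img = image.astype(np.float64)
    mu = gaussian_filter(img, sigma)                   # local mean
    var = gaussian_filter(img * img, sigma) - mu * mu  # local variance
    return (img - mu) / (np.sqrt(np.abs(var)) + c)

def fit_ggd(x):
    """Return (alpha, sigma2): shape and scale of a zero-mean GGD, by moment matching."""
    x = np.asarray(x, dtype=np.float64).ravel()
    grid = np.arange(0.2, 10.0, 0.001)
    rho = gamma(1 / grid) * gamma(3 / grid) / gamma(2 / grid) ** 2
    r = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)      # sample moment ratio
    alpha = grid[np.argmin((rho - 1 / r) ** 2)]        # invert rho(alpha) = 1/r
    return alpha, float(np.mean(x ** 2))
```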
  • Step S134 calculating differential signal matrices in the horizontal and vertical directions of the iris region image, and performing block processing on the differential signal matrices to obtain a sub-feature set, wherein the sub-feature set includes at least: the overall activity of the differential signal, the local block activity, and the number of low-intensity signals.
  • that is, the differential signal matrices in the horizontal and vertical directions of the iris region image may be calculated and subjected to block processing to obtain the differential-signal sub-features of the iris region image. The differential signal matrices can be calculated as the first differences of the image:
    D_h(i, j) = I(i, j+1) − I(i, j), D_v(i, j) = I(i+1, j) − I(i, j).
  • Step S136 obtaining a first feature set according to the feature vector and the sub-feature set.
  • that is, the spatial domain feature set of the iris region may be obtained from the feature vector (α, σ²), the overall activity of the differential signal, the local block activity, and the number of low-intensity signals.
  • the accuracy of the detection can be improved by using the number of low-intensity signals as the third feature of the differential signal.
  • in step S134, performing block processing on the differential signal matrices in the horizontal and vertical directions to obtain the sub-feature set may include:
  • Step S1342 performing block processing on the differential signal matrices in the horizontal direction and the vertical direction according to a horizontal preset pixel interval and a vertical preset pixel interval, respectively, to obtain a plurality of blocks.
  • both the horizontal preset pixel interval and the vertical preset pixel interval may be 3 pixels; that is, the horizontal differential signals may be divided into blocks at an interval of 3 pixels horizontally, and the vertical differential signals may be divided into blocks at an interval of 3 pixels vertically.
  • Step S1344 calculating the block boundary average gradient of each block to obtain the overall activity in the horizontal direction and the overall activity in the vertical direction, and calculating the average of the two overall activities to obtain the overall activity of the differential signal.
  • in an optional implementation, the block boundary average gradient B_k of the iris region image can be calculated as the average magnitude of the differential signal at the block boundaries, giving the overall activity in the horizontal direction and the overall activity in the vertical direction; the overall activities obtained in the two directions are then averaged to obtain the feature component B, the overall activity of the differential signal.
  • Step S1346 extracting the absolute value of the intra-block average difference of each block to obtain the local block activity in the horizontal direction and the local block activity in the vertical direction, and calculating the average of the local block activities in the two directions to obtain the local block activity.
  • in an optional implementation, the absolute value A_k of the intra-block average difference can be extracted as the local block activity in the horizontal direction and in the vertical direction; the local block activities obtained in the two directions are then averaged to obtain the feature component A, the local block activity.
  • Step S1348 obtaining, from the differential signal matrices in the horizontal and vertical directions, the numbers of differential signals smaller than a preset value, to obtain the number of low-intensity signals in the horizontal direction and the number of low-intensity signals in the vertical direction, and calculating the average of the two numbers to obtain the number of low-intensity signals.
  • the preset value may be 2.
  • that is, step S1348 may be: obtaining, from the horizontal differential signal matrix, the number of differential signals smaller than the preset value as the number of low-intensity signals in the horizontal direction; and obtaining, from the vertical differential signal matrix, the number of differential signals smaller than the preset value as the number of low-intensity signals in the vertical direction.
  • in an optional implementation, the number of differential signals whose magnitude is less than 2 can be counted in the horizontal direction and in the vertical direction, as the number of low-intensity signals in each direction; the two counts are then averaged to obtain the feature component Z*, the number of low-intensity signals.
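  • Since the formulas for B_k, A_k and Z* are not reproduced in the text, the sketch below implements one plausible reading of steps S1342 to S1348 (first differences, 3-pixel blocks, boundary gradients, within-block mean differences, and a normalized count of differences below 2); treat it as an assumption-laden illustration, not the patented formulas.

```python
# Spatial-domain sub-features of the iris region image: overall activity B,
# local block activity A, low-intensity count Z*. Definitions are assumed.
import numpy as np

def spatial_activity(img, step=3, low=2.0):
    img = img.astype(np.float64)
    dh = np.diff(img, axis=1)                      # horizontal differential signal
    dv = np.diff(img, axis=0)                      # vertical differential signal
    # B: mean gradient magnitude sampled on the 3-pixel block boundaries
    b = (np.abs(dh[:, step - 1::step]).mean() +
         np.abs(dv[step - 1::step, :]).mean()) / 2
    # A: |within-block mean difference|, averaged over all 3-pixel blocks
    a_h = [abs(dh[:, i:i + step].mean()) for i in range(0, dh.shape[1] - step + 1, step)]
    a_v = [abs(dv[i:i + step, :].mean()) for i in range(0, dv.shape[0] - step + 1, step)]
    a = (np.mean(a_h) + np.mean(a_v)) / 2
    # Z*: share of differential signals whose magnitude is below the preset value
    z = (np.mean(np.abs(dh) < low) + np.mean(np.abs(dv) < low)) / 2
    return a, b, z
```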
  • Spat_ROI1 = (α_R1, A_R1, B_R1, Z*_R1) is the spatial domain feature extracted from the left iris region image;
  • Spat_ROI2 = (α_R2, A_R2, B_R2, Z*_R2) is the spatial domain feature extracted from the right iris region image;
  • Spat_ROI1 and Spat_ROI2 together form the first feature set of the iris region image described above.
  • in step S106, performing frequency domain feature extraction on the pupil edge region image to obtain the second feature set may include:
  • Step S142 down-sampling the pupil edge region image twice to obtain a sampled image and a subsampled image.
  • that is, two downsamplings are performed: the first downsampling yields the sampled image, and downsampling the sampled image again yields the subsampled image.
  • Step S144 segmenting the pupil edge region image, the sampled image and the subsampled image, respectively, to obtain a plurality of first image blocks, a plurality of second image blocks, and a plurality of third image blocks.
  • the pupil edge region image, the sampled image, and the subsampled image may be segmented to obtain a plurality of image blocks for each image.
  • the step S144 may include: segmenting the pupil edge region image to obtain a plurality of first image blocks; segmenting the sampled image to obtain a plurality of second image blocks; and segmenting the subsampled image to obtain a plurality of third image blocks.
  • Step S146 performing a discrete cosine transform on each of the first image blocks, each of the second image blocks and each of the third image blocks, respectively, to obtain a plurality of processed first image blocks, a plurality of processed second image blocks, and a plurality of processed third image blocks.
  • the step S146 may include: performing a discrete cosine transform on each of the first image blocks to obtain the plurality of processed first image blocks; performing a discrete cosine transform on each of the second image blocks to obtain the plurality of processed second image blocks; and performing a discrete cosine transform on each of the third image blocks to obtain the plurality of processed third image blocks.
  • DCT is short for Discrete Cosine Transform.
  • for example, the sampled image can be divided into 5*5 blocks to obtain 5*5 second image blocks, and applying the DCT to a 5*5 second image block yields a 5*5 DCT feature matrix.
  • Step S148 performing feature extraction on the processed plurality of first image blocks, the processed plurality of second image blocks, and the processed plurality of third image blocks, respectively, to obtain the second feature set of the pupil edge region image, the second feature set of the sampled image, and the second feature set of the subsampled image, wherein each second feature set includes at least: shape parameters, frequency domain direction features, and frequency domain energy features.
  • feature extraction may be performed on each image block to obtain a frequency domain feature of the pupil edge image, a frequency domain feature of the sampled image, and a frequency domain feature of the subsampled image, that is, the second feature set described above.
  • the step S148 may include: performing feature extraction on the processed plurality of first image blocks to obtain the second feature set of the pupil edge region image; performing feature extraction on the processed plurality of second image blocks to obtain the second feature set of the sampled image; and performing feature extraction on the processed plurality of third image blocks to obtain the second feature set of the subsampled image.
  • in step S142, down-sampling the pupil edge region image twice to obtain the sampled image and the subsampled image may include:
  • Step S1422 filtering the pupil edge region image with a first low-pass filter, and down-sampling the filtered pupil edge region image to obtain the sampled image.
  • Step S1424 filtering the sampled image with a second low-pass filter, and down-sampling the filtered sampled image to obtain the subsampled image.
  • the first low pass filter and the second low pass filter are set as needed, and the two low pass filters may be the same.
  • in an optional implementation, the 20*40 pupil edge region image may be filtered by the first low-pass filter and downsampled to obtain a 10*20 sampled image, which is then filtered by the second low-pass filter and downsampled to obtain a 5*10 subsampled image.
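  • A sketch of the two downsamplings; a Gaussian kernel is an assumed choice of low-pass filter (the text only requires some low-pass filter and allows the two filters to be the same).

```python
# Low-pass filter then decimate by 2; applied twice to build the pyramid.
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample(img, sigma=0.8):                # sigma is an assumed filter width
    return gaussian_filter(img.astype(np.float64), sigma)[::2, ::2]

# 20*40 pupil edge region -> 10*20 sampled image -> 5*10 subsampled image:
# sampled = downsample(edge_region); subsampled = downsample(sampled)
```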
  • in step S144, segmenting the pupil edge region image, the sampled image, and the subsampled image respectively to obtain a plurality of first image blocks, a plurality of second image blocks, and a plurality of third image blocks may include:
  • Step S1442 performing block processing on the pupil edge region image according to a first preset block size to obtain a plurality of first image blocks.
  • the foregoing first preset block size may be a 9*9 block size.
  • Step S1444 Perform block processing on the sampled image according to the second preset block size to obtain a plurality of second image blocks.
  • the foregoing second preset block size may be a 5*5 block size.
  • Step S1446 performing block processing on the subsampled image according to a third preset block size to obtain a plurality of third image blocks.
  • the foregoing third preset block size may be a 3*3 block size.
  • in an optional implementation, the pupil edge region image may be segmented into 9*9-pixel blocks to obtain 15 first image blocks, of which 8 are full blocks of size 9*9; the sampled image may be segmented into 5*5-pixel blocks to obtain 8 second image blocks of size 5*5; and the subsampled image may be segmented into 3*3-pixel blocks to obtain 8 third image blocks, of which 3 are full blocks of size 3*3.
  • for brevity, a first image block of size 9*9 may be referred to as a 9*9 first image block, a second image block of size 5*5 may be referred to as a 5*5 second image block, and a third image block of size 3*3 may be referred to as a 3*3 third image block.
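  • A sketch of the segmentation and DCT of steps S144/S146; keeping the partial blocks at the borders reproduces the counts above (a 20*40 image cut into 9*9 blocks gives 15 blocks, 8 of them full 9*9 blocks).

```python
# Split an image into m*m blocks (partial border blocks kept) and DCT each one.
import numpy as np
from scipy.fftpack import dct

def split_blocks(img, m):
    return [img[i:i + m, j:j + m]
            for i in range(0, img.shape[0], m)
            for j in range(0, img.shape[1], m)]

def dct2(block):
    """2-D type-II DCT of one block, giving its DCT feature matrix."""
    return dct(dct(block.astype(np.float64), axis=0, norm="ortho"),
               axis=1, norm="ortho")

# e.g. dct_blocks = [dct2(b) for b in split_blocks(edge_region, 9)]  # 15 blocks
```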
  • in step S148, performing feature extraction on the processed first image blocks, the processed second image blocks, and the processed third image blocks respectively, to obtain the second feature set of the pupil edge region image, the second feature set of the sampled image, and the second feature set of the subsampled image may include:
  • Step S1482 performing feature extraction on the plurality of first image blocks, the plurality of second image blocks, and the plurality of third image blocks, respectively, to obtain the shape parameter of the pupil edge region image, the shape parameter of the sampled image, and the shape parameter of the subsampled image.
  • the step S1482 may include: performing feature extraction on the plurality of first image blocks to obtain the shape parameter of the pupil edge region image; performing feature extraction on the plurality of second image blocks to obtain the shape parameter of the sampled image; and performing feature extraction on the plurality of third image blocks to obtain the shape parameter of the subsampled image.
  • in an optional implementation, the DCT feature matrix of each 9*9 first image block may be processed to obtain a feature vector for each such block, and the feature vectors extracted from all 9*9 first image blocks are aggregated to obtain the shape parameter of the pupil edge region image; the DCT feature matrix of each 5*5 second image block is processed to obtain a feature vector for each such block, and the feature vectors extracted from all 5*5 second image blocks are aggregated to obtain the shape parameter of the sampled image; and the DCT feature matrix of each 3*3 third image block is processed to obtain a feature vector for each such block, and the feature vectors extracted from all 3*3 third image blocks are aggregated to obtain the shape parameter of the subsampled image.
  • Step S1484 dividing each first image block, each second image block and each third image block into a plurality of regions along the main diagonal direction, respectively, to obtain a plurality of partitions of each first image block, a plurality of partitions of each second image block, and a plurality of partitions of each third image block.
  • the step S1484 may include: dividing each first image block into a plurality of regions along its main diagonal direction to obtain the plurality of partitions of each first image block, that is, each partitioned first image block; dividing each second image block into a plurality of regions along its main diagonal direction to obtain the plurality of partitions of each second image block, that is, each partitioned second image block; and dividing each third image block into a plurality of regions along its main diagonal direction to obtain the plurality of partitions of each third image block, that is, each partitioned third image block.
  • in an optional implementation, the DCT feature matrix of each 9*9 first image block, the DCT feature matrix of each 5*5 second image block, and the DCT feature matrix of each 3*3 third image block may each be partitioned by direction in a preset manner. As shown in FIG. 10, partitioning may be performed at 30 degrees, 60 degrees, and 90 degrees along the main diagonal direction, dividing the matrix into three direction partitions, that is, the first direction partition, the second direction partition and the third direction partition in FIG. 10.
  • Step S1486 performing feature extraction on each partitioned first image block, each partitioned second image block, and each partitioned third image block, to obtain the frequency domain direction feature of the pupil edge region image, the frequency domain direction feature of the sampled image, and the frequency domain direction feature of the subsampled image.
  • the step S1486 may include: performing feature extraction on each partitioned first image block to obtain the frequency domain direction feature of the pupil edge region image; performing feature extraction on each partitioned second image block to obtain the frequency domain direction feature of the sampled image; and performing feature extraction on each partitioned third image block to obtain the frequency domain direction feature of the subsampled image.
  • in an optional implementation, the probability density of each partition of each 9*9 first image block can be extracted, and the variance of the partition probability densities computed for all 9*9 first image blocks, to obtain the frequency domain direction feature of the pupil edge region image; the probability density of each partition of each 5*5 second image block can be extracted, and the variance computed for all 5*5 second image blocks, to obtain the frequency domain direction feature of the sampled image; and the probability density of each partition of each 3*3 third image block can be extracted, and the variance computed for all 3*3 third image blocks, to obtain the frequency domain direction feature of the subsampled image.
  • Step S1488 performing feature extraction on the plurality of first image blocks, the plurality of second image blocks, and the plurality of third image blocks respectively, to obtain the frequency domain energy feature of the pupil edge region image, the frequency domain energy feature of the sampled image, and the frequency domain energy feature of the subsampled image.
  • the step S1488 may include: performing feature extraction on the plurality of first image blocks to obtain the frequency domain energy feature of the pupil edge region image; performing feature extraction on the plurality of second image blocks to obtain the frequency domain energy feature of the sampled image; and performing feature extraction on the plurality of third image blocks to obtain the frequency domain energy feature of the subsampled image.
  • in an optional implementation, energy features may be extracted from the DCT feature matrix of each 9*9 first image block, and the energy features of all 9*9 first image blocks aggregated to obtain the frequency domain energy feature of the pupil edge region image; energy feature vectors are extracted from the DCT feature matrix of each 5*5 second image block, and the energy feature vectors of all 5*5 second image blocks aggregated to obtain the frequency domain energy feature of the sampled image; and energy feature vectors are extracted from the DCT feature matrix of each 3*3 third image block, and the energy feature vectors of all 3*3 third image blocks aggregated to obtain the frequency domain energy feature of the subsampled image.
  • in step S1482, performing feature extraction on the plurality of first image blocks, the plurality of second image blocks, and the plurality of third image blocks respectively, to obtain the shape parameter of the pupil edge region image, the shape parameter of the sampled image, and the shape parameter of the subsampled image may include:
  • Step S150 using a generalized Gaussian parameter model, fitting each first image block, each second image block and each third image block respectively, to obtain a first feature of each first image block, a first feature of each second image block, and a first feature of each third image block, wherein the first feature includes a first parameter and a second parameter.
  • that is, the fitting may include: fitting each first image block with the generalized Gaussian parameter model to obtain the first feature of each first image block; fitting each second image block with the generalized Gaussian parameter model to obtain the first feature of each second image block; and fitting each third image block with the generalized Gaussian parameter model to obtain the first feature of each third image block.
  • the generalized Gaussian parameter model may be the generalized Gaussian distribution described above; the first parameter may be the shape parameter α of the generalized Gaussian distribution, and the second parameter may be the probability density (scale) parameter σ of the generalized Gaussian distribution.
  • Step S152 respectively calculating the average of the first features of the plurality of first image blocks, the average of the first features of the plurality of second image blocks, and the average of the first features of the plurality of third image blocks, to obtain a first average of the plurality of first image blocks, a first average of the plurality of second image blocks, and a first average of the plurality of third image blocks.
  • the step S152 may include: calculating the average of the first features of the plurality of first image blocks as the first average of the plurality of first image blocks; calculating the average of the first features of the plurality of second image blocks as the first average of the plurality of second image blocks; and calculating the average of the first features of the plurality of third image blocks as the first average of the plurality of third image blocks.
  • the first average value described above includes an average of the first parameter and an average of the second parameter.
  • Step S154 sorting the first parameters of the plurality of first image blocks, the first parameters of the plurality of second image blocks, and the first parameters of the plurality of third image blocks in ascending order, respectively, and sorting the second parameters of the plurality of first image blocks, the second parameters of the plurality of second image blocks, and the second parameters of the plurality of third image blocks in descending order, respectively.
  • the step S154 may include: sorting the first parameters of the plurality of first image blocks in ascending order, sorting the first parameters of the plurality of second image blocks in ascending order, and sorting the first parameters of the plurality of third image blocks in ascending order; and sorting the second parameters of the plurality of first image blocks in descending order, sorting the second parameters of the plurality of second image blocks in descending order, and sorting the second parameters of the plurality of third image blocks in descending order.
  • Step S156 respectively calculating the average of the first features of the top preset number of first image blocks after sorting, the average of the first features of the top preset number of second image blocks after sorting, and the average of the first features of the top preset number of third image blocks after sorting, to obtain a second average of the plurality of first image blocks, a second average of the plurality of second image blocks, and a second average of the plurality of third image blocks.
  • the step S156 may include: in the queue obtained by sorting the first parameters of the plurality of first image blocks in ascending order, calculating the average value A of the first parameters of the top preset number of first image blocks; and, in the queue obtained by sorting the second parameters of the plurality of first image blocks in descending order, calculating the average value B of the second parameters of the top preset number of first image blocks; the two average values A and B are the second averages of the plurality of first image blocks. Similarly, the average value C of the first parameters of the top preset number of second image blocks in the ascending queue, and the average value D of the second parameters of the top preset number of second image blocks in the descending queue, are calculated, and the two average values C and D are the second averages of the plurality of second image blocks; the second averages of the plurality of third image blocks are calculated in the same way.
  • the preset number may be the first 10% of the number of all sorted image blocks.
  • that is, the second average value includes the average of the first parameters of the top preset number of blocks after the ascending sort, and the average of the second parameters of the top preset number of blocks after the descending sort.
  • Step S158 obtaining the shape parameter of the pupil edge region image, the shape parameter of the sampled image, and the shape parameter of the subsampled image according to, respectively, the first average and the second average of the plurality of first image blocks, the first average and the second average of the plurality of second image blocks, and the first average and the second average of the plurality of third image blocks.
  • the step S158 may include: obtaining the shape parameter of the pupil edge region image according to the first average and the second average of the plurality of first image blocks; obtaining the shape parameter of the sampled image according to the first average and the second average of the plurality of second image blocks; and obtaining the shape parameter of the subsampled image according to the first average and the second average of the plurality of third image blocks.
  • in an optional implementation, each image sub-block may be fitted with the generalized Gaussian parameter model described above to obtain a first feature (α_{i,j}, σ_{i,j}) consisting of the first parameter α_{i,j} and the second parameter σ_{i,j}, where i and j index the sub-blocks, M_2 and N_2 are the height and width of the pupil edge region image or of its downsampled images (that is, the sampled image or the subsampled image), and m is the block side length, so that i = 1, ..., ⌈M_2/m⌉ and j = 1, ..., ⌈N_2/m⌉.
  • the average of the first parameters α_{i,j} over all sub-blocks, that is, the average of the first parameter described above, and the average of the second parameters σ_{i,j} over all sub-blocks, that is, the average of the second parameter described above, give the first average value.
  • the average of α_{i,j} over the first 10% of sub-blocks after sorting all α_{i,j} in ascending order, and the average of σ_{i,j} over the first 10% of sub-blocks after sorting all σ_{i,j} in descending order, give the second average value; the four averages together form the shape parameter vector.
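  • A sketch of the shape-parameter aggregation just described, for one image scale; fit_ggd is the helper from the earlier MSCN sketch, and the 10% pooling fraction follows the text.

```python
# Per-block GGD fit, then aggregate into the four-component shape parameter
# vector (mean alpha, mean sigma, low-10% alpha mean, high-10% sigma mean).
import numpy as np

def shape_parameters(dct_blocks, fit_ggd, frac=0.10):
    params = np.array([fit_ggd(b) for b in dct_blocks])   # rows: (alpha, sigma)
    alphas, sigmas = params[:, 0], params[:, 1]
    k = max(1, int(round(frac * len(dct_blocks))))
    alpha_low  = np.sort(alphas)[:k].mean()        # first 10% in ascending order
    sigma_high = np.sort(sigmas)[::-1][:k].mean()  # first 10% in descending order
    return alphas.mean(), sigmas.mean(), alpha_low, sigma_high
```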
  • in step S1486, performing feature extraction on each partitioned first image block, each partitioned second image block, and each partitioned third image block, to obtain the frequency domain direction feature of the pupil edge region image, the frequency domain direction feature of the sampled image, and the frequency domain direction feature of the subsampled image may include:
  • Step S171 using the generalized Gaussian distribution, fitting each partition of each first image block, each partition of each second image block, and each partition of each third image block, to obtain the probability density of each partition of each first image block, the probability density of each partition of each second image block, and the probability density of each partition of each third image block.
  • the step S171 may include: fitting each partition of each first image block with the generalized Gaussian distribution to obtain the probability density of each partition of each first image block; fitting each partition of each second image block with the generalized Gaussian distribution to obtain the probability density of each partition of each second image block; and fitting each partition of each third image block with the generalized Gaussian distribution to obtain the probability density of each partition of each third image block.
  • Step S172 respectively calculating the variance of the probability densities of the plurality of partitions of each first image block, the variance of the probability densities of the plurality of partitions of each second image block, and the variance of the probability densities of the plurality of partitions of each third image block, to obtain a second feature of each first image block, a second feature of each second image block, and a second feature of each third image block.
  • the step S172 may include: calculating the variance of the probability densities of the plurality of partitions of each first image block as the second feature of each first image block; calculating the variance of the probability densities of the plurality of partitions of each second image block as the second feature of each second image block; and calculating the variance of the probability densities of the plurality of partitions of each third image block as the second feature of each third image block.
  • Step S173 respectively calculating the average of the second features of the plurality of first image blocks, the average of the second features of the plurality of second image blocks, and the average of the second features of the plurality of third image blocks, to obtain a third average of the plurality of first image blocks, a third average of the plurality of second image blocks, and a third average of the plurality of third image blocks.
  • the step S173 may include: calculating the average of the second features of the plurality of first image blocks as the third average of the plurality of first image blocks; calculating the average of the second features of the plurality of second image blocks as the third average of the plurality of second image blocks; and calculating the average of the second features of the plurality of third image blocks as the third average of the plurality of third image blocks.
  • Step S174 sorting the second feature of the plurality of first image blocks, the second feature of the plurality of second image blocks, and the second feature of the plurality of third image blocks in descending order.
  • the step S174 may include: sorting the second features of the plurality of first image blocks in descending order; sorting the second features of the plurality of second image blocks in descending order; and sorting the second features of the plurality of third image blocks in descending order.
  • Step S175 respectively calculating the average of the second features of the top preset number of first image blocks after sorting, the average of the second features of the top preset number of second image blocks after sorting, and the average of the second features of the top preset number of third image blocks after sorting, to obtain a fourth average of the plurality of first image blocks, a fourth average of the plurality of second image blocks, and a fourth average of the plurality of third image blocks.
  • the step S175 may include: calculating the average of the second features of the top preset number of first image blocks as the fourth average of the plurality of first image blocks; calculating the average of the second features of the top preset number of second image blocks as the fourth average of the plurality of second image blocks; and calculating the average of the second features of the top preset number of third image blocks as the fourth average of the plurality of third image blocks.
  • the preset number may be the first 10% of the number of all sorted image blocks.
  • Step S176 obtaining the frequency domain direction feature of the pupil edge region image, the frequency domain direction feature of the sampled image, and the frequency domain direction feature of the subsampled image according to, respectively, the third average and the fourth average of the plurality of first image blocks, the third average and the fourth average of the plurality of second image blocks, and the third average and the fourth average of the plurality of third image blocks.
  • the step S176 may include: obtaining the frequency domain direction feature of the pupil edge region image according to the third average and the fourth average of the plurality of first image blocks; obtaining the frequency domain direction feature of the sampled image according to the third average and the fourth average of the plurality of second image blocks; and obtaining the frequency domain direction feature of the subsampled image according to the third average and the fourth average of the plurality of third image blocks.
• in one implementation, a generalized Gaussian model fit is performed on the three direction partitions to obtain ρ_{i,j,1}, ρ_{i,j,2}, ρ_{i,j,3}, i.e. the probability density of each partition described above, and the variance of ρ_{i,j,1}, ρ_{i,j,2}, ρ_{i,j,3} is taken as σ_{i,j}², i.e. the second feature.
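• For illustration only, this per-block direction feature could be sketched in Python as below: the DCT coefficients of a block are split into three angular partitions along the main diagonal, a GGD statistic ρ is estimated per partition by moment matching (an assumption, since the patent's fitting formula survives only as an image), and the variance of the three values is the second feature. The partition geometry and function names are hypothetical.

```python
import numpy as np

def partition_by_direction(dct_block):
    """Split DCT coefficients (DC excluded) into three angular partitions
    along the main diagonal (roughly 0-30, 30-60 and 60-90 degrees)."""
    n = dct_block.shape[0]
    parts = [[], [], []]
    for i in range(n):
        for j in range(n):
            if i == 0 and j == 0:
                continue  # skip the DC coefficient
            angle = np.degrees(np.arctan2(i, j))  # 0 deg = first row
            parts[min(int(angle // 30), 2)].append(dct_block[i, j])
    return [np.asarray(p) for p in parts]

def direction_feature(dct_block):
    """Variance of the per-partition moment ratio rho (the second feature)."""
    rhos = []
    for coeffs in partition_by_direction(dct_block):
        e_abs = np.mean(np.abs(coeffs))
        e_sq = np.mean(coeffs ** 2)
        rhos.append(e_abs ** 2 / (e_sq + 1e-12))  # moment-matching statistic
    return np.var(rhos)
```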
• in one implementation, step S1488 of performing feature extraction on the plurality of first image blocks, the plurality of second image blocks, and the plurality of third image blocks, to obtain the frequency domain energy features of the pupil edge region image, of the sampled image, and of the subsampled image, may include:
• Step S181: perform energy extraction on each first image block, each second image block and each third image block along the anti-diagonal direction, to obtain multiple energies of each first image block, multiple energies of each second image block, and multiple energies of each third image block.
• the step S181 may include: performing energy extraction on each first image block along the anti-diagonal direction to obtain the multiple energies of each first image block; performing energy extraction on each second image block along the anti-diagonal direction to obtain the multiple energies of each second image block; and performing energy extraction on each third image block along the anti-diagonal direction to obtain the multiple energies of each third image block.
• in one implementation, the partition may be performed at 30 degrees, 60 degrees, and 90 degrees along the anti-diagonal direction, dividing each block into three energy partitions, i.e. the first, second and third energy partitions shown in the figure, from which the low-frequency, mid-frequency and high-frequency energies E_{i,j,1}, E_{i,j,2}, E_{i,j,3} are extracted.
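• As a hedged illustration of steps S181-S183, the band energies and the resulting energy feature might be computed as below. The use of the anti-diagonal index i+j as the frequency-band coordinate and the helper names are assumptions, not the patent's exact partition:

```python
import numpy as np

def band_energies(dct_block):
    """Low/mid/high-frequency energies E1, E2, E3 from three bands taken
    along the anti-diagonal index i + j (an illustrative partition)."""
    n = dct_block.shape[0]
    bands = [[], [], []]
    for i in range(n):
        for j in range(n):
            if i == 0 and j == 0:
                continue  # skip the DC coefficient
            bands[min(3 * (i + j) // (2 * n - 1), 2)].append(dct_block[i, j] ** 2)
    return [float(np.sum(b)) for b in bands]

def energy_feature(dct_block):
    """Average of the successive band-energy differences (steps S182-S183)."""
    e1, e2, e3 = band_energies(dct_block)
    return np.mean([abs(e2 - e1), abs(e3 - e2)])
```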
• Step S182: respectively calculate the differences between the multiple energies of each first image block, of each second image block, and of each third image block, to obtain multiple energy differences of each first image block, multiple energy differences of each second image block, and multiple energy differences of each third image block.
• the step S182 may include: calculating the differences between the multiple energies of each first image block to obtain the multiple energy differences of each first image block; calculating the differences between the multiple energies of each second image block to obtain the multiple energy differences of each second image block; and calculating the differences between the multiple energies of each partition of each third image block to obtain the multiple energy differences of each third image block.
• Step S183: respectively calculate the average of the multiple energy differences of each first image block, of each second image block, and of each third image block, to obtain the energy feature of each first image block, the energy feature of each second image block, and the energy feature of each third image block.
• the foregoing step S183 may include: calculating the average of the multiple energy differences of each first image block as the energy feature of each first image block; calculating the average of the multiple energy differences of each second image block as the energy feature of each second image block; and calculating the average of the multiple energy differences of each third image block as the energy feature of each third image block.
• Step S184: respectively calculate the average of the energy features of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks, to obtain a fifth average of the plurality of first image blocks, a fifth average of the plurality of second image blocks, and a fifth average of the plurality of third image blocks.
• the step S184 may include: calculating the average of the energy features of the plurality of first image blocks as the fifth average of the plurality of first image blocks; calculating the average of the energy features of the plurality of second image blocks as the fifth average of the plurality of second image blocks; and calculating the average of the energy features of the plurality of third image blocks as the fifth average of the plurality of third image blocks.
  • Step S185 sorting the energy characteristics of each first image block, the energy characteristics of each second image block, and the energy characteristics of each third image block.
  • the foregoing step S185 may include: sorting energy features of each first image block; sorting energy features of each second image block; and sorting energy features of each third image block.
• in one implementation, the sorting in step S185 may be in ascending order.
• Step S186: respectively calculate the average of the energy features of the top-ranked preset number of first image blocks, of the top-ranked preset number of second image blocks, and of the top-ranked preset number of third image blocks, to obtain a sixth average of the plurality of first image blocks, a sixth average of the plurality of second image blocks, and a sixth average of the plurality of third image blocks.
• the step S186 may include: calculating the average of the energy features of the top-ranked preset number of first image blocks as the sixth average of the plurality of first image blocks; calculating the average of the energy features of the top-ranked preset number of second image blocks as the sixth average of the plurality of second image blocks; and calculating the average of the energy features of the top-ranked preset number of third image blocks as the sixth average of the plurality of third image blocks.
  • the preset number may be the first 10% of the number of all sorted image blocks.
• Step S187: obtain the frequency domain energy feature of the pupil edge region image, the frequency domain energy feature of the sampled image, and the frequency domain energy feature of the subsampled image according to, respectively, the fifth and sixth averages of the plurality of first image blocks, the fifth and sixth averages of the plurality of second image blocks, and the fifth and sixth averages of the plurality of third image blocks.
• the step S187 may include: obtaining the frequency domain energy feature of the pupil edge region image according to the fifth average and the sixth average of the plurality of first image blocks; obtaining the frequency domain energy feature of the sampled image according to the fifth average and the sixth average of the plurality of second image blocks; and obtaining the frequency domain energy feature of the subsampled image according to the fifth average and the sixth average of the plurality of third image blocks.
• in one implementation, the following frequency domain features are finally extracted:
• Freq_ROI3, the frequency domain features extracted from the pupil edge region image, i.e. the second feature set of the pupil edge region image described above;
• Freq_down1, the frequency domain features extracted from the first downsampled image of the pupil edge region, i.e. the second feature set of the sampled image described above;
• Freq_down2, the frequency domain features extracted from the second downsampled image of the pupil edge region, i.e. the second feature set of the subsampled image described above.
• in one implementation, step S108 of performing feature screening on the first feature set and the second feature set may include:
• Step S1082: filter the first feature set and the second feature set by compression estimation to obtain the feature set of the iris image.
• in one implementation, the compression estimation may be the Lasso (Least Absolute Shrinkage and Selection Operator).
• Lasso is a compression estimation method whose basic idea is to estimate the regression coefficients that minimize the residual sum of squares under the constraint that the sum of the absolute values of the regression coefficients is less than a constant. Because some of the resulting regression coefficients are exactly 0, features can be selected accordingly, achieving the purpose of dimensionality reduction.
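• A minimal sketch of this selection step using scikit-learn's Lasso is given below; the data are synthetic stand-ins for the 32-dimensional feature set, and the value of alpha (which plays the role of the constraint constant) is a placeholder, not a value taken from the patent:

```python
import numpy as np
from sklearn.linear_model import Lasso

# X: N x 32 feature matrix, y: sharpness labels (+1 clear, -1 blurred)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))           # stand-in for the 32-dim feature set
y = np.sign(X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=200))

lasso = Lasso(alpha=0.05)                # alpha controls the shrinkage strength
lasso.fit(X, y)
selected = np.flatnonzero(lasso.coef_)   # keep features with non-zero weights
X_sel = X[:, selected]                   # the screened feature set
```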
• in one implementation, the Lasso may be used to perform feature selection on the 32-dimensional feature set obtained above, i.e. the first feature set and the second feature set.
• the actual feature selection results vary with the samples.
• in one implementation, the following features are finally selected:
• Spat_ROI1′ = (B_R1, Z*_R1), the feature vector of the left iris region;
• Spat_ROI2′ = (γ_R2, B_R2, Z*_R2), the feature vector of the right iris region; Spat_ROI1′ and Spat_ROI2′ form the screened first feature set of the iris region image;
• the feature set LFSF = (Spat_ROI1′, Spat_ROI2′, Freq_ROI3′, Freq_down1′, Freq_down2′) is finally formed from the 32-dimensional index set.
• in one implementation, step S108 of detecting the screened feature set to obtain the detection result may include:
• Step S1084: classify the feature set of the iris image with a preset classifier to obtain the classification result of the iris image to be detected.
• in one implementation, the preset classifier may be an SVM (Support Vector Machine) classifier, an AdaBoost classifier, a joint Bayesian classifier, or any other classifier capable of classifying features.
• Step S1086: obtain the detection result according to the classification result of the iris image to be detected.
• in one implementation, an SVM classifier with a linear kernel function (C-SVC) may be used to classify the database samples, with the final feature set obtained by the Lasso feature selection used as the input samples of the SVM. The recognition problem is a two-class problem: clear images (+1) and blurred images (-1). Finally, an appropriate penalty factor is selected for training, yielding the trained SVM classifier.
• in one implementation, the trained SVM classifier makes a 0-1 decision on each image: an image judged 0 is directly filtered out as a blurred image, and an image judged 1 is a clear image.
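• A minimal sketch of the training and filtering step with scikit-learn's C-SVC (linear kernel) follows; X_sel and y are assumed to come from the Lasso step above, and C=1.0 is a placeholder for the experimentally chosen penalty factor:

```python
from sklearn.svm import SVC

# X_sel, y from the Lasso step above; labels: +1 clear, -1 blurred
clf = SVC(kernel="linear", C=1.0)   # C is the penalty factor chosen by tuning
clf.fit(X_sel, y)

pred = clf.predict(X_sel)           # +1 -> keep the image, -1 -> filter it out
is_clear = pred == 1
```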
  • an apparatus embodiment of an apparatus for detecting an iris image is provided.
  • FIG. 12 is a schematic diagram of an apparatus for detecting an iris image according to an embodiment of the present application. As shown in FIG. 12, the apparatus includes:
  • the obtaining module 121 is configured to acquire an iris image to be detected.
• in one implementation, the iris image may include the pupil, iris, sclera, eyelids and eyelashes; that is, the iris image may be an image of the human eye region.
• in one implementation, a grayscale iris image to be detected may be acquired; that is, the iris image to be detected may be a grayscale image, which may be referred to as a grayscale iris image in the embodiments of the present application.
  • a determining module 123 is configured to determine an iris region image and a pupil edge region image from the iris image, wherein the iris region image can be used to characterize the iris, and the pupil edge region image can be used to characterize the edge of the iris.
• in one implementation, the iris region image may be the image of the iris region in the iris image, and the pupil edge region image may be the image of the pupil edge region in the iris image, i.e. the inner edge region of the iris, which may include part of the image of the iris region and part of the image of the pupil region.
• sharp edges are the areas of an image most susceptible to blur. In an iris image, the most obvious sharp edge is the pupil edge, and that area is not easily affected by noise. Therefore, in an ideal environment, the pupil edge is the most useful for judging whether the iris image is blurred.
• that is, in an ideal environment, the image information contained in the pupil edge region of the iris image is the information most useful for judging whether the iris image is blurred.
• in one implementation, the vicinity of the pupil edge may be selected from the iris image as a Region Of Interest (ROI); so that images whose pupil edge sharpness is not obvious can also be judged, the iris region may be selected as another region of interest, yielding the iris region image and the pupil edge region image.
  • the extraction module 125 is configured to perform spatial domain feature extraction on the iris region image to obtain a first feature set, and perform frequency domain feature extraction on the pupil edge region image to obtain a second feature set.
• in one implementation, multiple feature extraction methods may be used to extract multiple features from the two ROIs; for example, the spatial domain features of the iris region and the frequency domain features of the pupil edge region may be extracted, yielding the feature sets used to evaluate the degree of blur of the iris image, i.e. the first feature set and the second feature set described above.
  • the detecting module 127 is configured to detect the first feature set and the second feature set to obtain a detection result, wherein the detection result is used to represent whether the iris image is clear.
• in one implementation, feature screening may be performed on the extracted first feature set and second feature set to obtain the final feature set, and detection is performed according to the final feature set to determine whether the collected iris image is clear, yielding the detection result.
• through the above apparatus, an iris image to be detected can be acquired; an iris region image and a pupil edge region image are determined from the iris image; spatial domain feature extraction is performed on the iris region image to obtain a first feature set, and frequency domain feature extraction is performed on the pupil edge region image to obtain a second feature set; and the two feature sets are detected to obtain a detection result, thereby implementing blur detection of the iris image.
• since the iris region image and the pupil edge region image are determined at the same time and the first and second feature sets are extracted from the two region images, the feature-set representation is more comprehensive and the detection accuracy is improved. Further, after the first and second feature sets are extracted, feature screening of the two sets not only streamlines the feature set and speeds up detection, but also avoids redundant feature information and improves accuracy, thereby solving the technical problem that iris image blur detection methods in the prior art have low detection accuracy.
  • the determining module 123 may include:
  • a positioning module for positioning the iris image to obtain a radius and a center coordinate of the pupil
• a first processing module configured to obtain a first to-be-determined region image and a second to-be-determined region image according to the radius, the center coordinates and a first preset range, and to obtain a third to-be-determined region image and a fourth to-be-determined region image according to the radius, the center coordinates and a second preset range, where the first and second to-be-determined region images are located in the iris region, and the third and fourth to-be-determined region images are located in the pupil edge region;
• a second processing module configured to acquire, from the first and second to-be-determined region images, the region images satisfying the first preset condition to obtain the iris region image, and to acquire, from the third and fourth to-be-determined region images, the region images satisfying the second preset condition to obtain the pupil edge region image.
  • the first processing module may include:
  • a first determining sub-module configured to determine whether the first to-be-determined area image and the second to-be-determined area image contain noise
• a first processing submodule configured to take the first to-be-determined region image and the second to-be-determined region image as the iris region image if both contain noise, or if neither contains noise;
  • a second processing submodule configured to replace the first to-be-determined area image with the second to-be-determined area image if the first to-be-determined area image contains noise, and the second to-be-determined area image does not contain noise;
  • a third processing submodule configured to replace the second to-be-determined area image with the first to-be-determined area image if the first to-be-determined area image does not contain noise, and the second to-be-determined area image contains noise.
  • the second processing module may include:
  • a second determining sub-module configured to determine whether the image of the third pending area contains spot noise
  • a fourth processing sub-module configured to use the fourth to-be-determined area image as the pupil edge area image if the third pending area image contains spot noise
  • a fifth processing submodule configured to use the third to-be-determined area image as the pupil edge area image if the third pending area image does not contain spot noise.
  • the extracting module 125 may include:
  • a first calculating module configured to calculate a de-average contrast normalization coefficient of the iris region image, and using a generalized Gaussian distribution to fit the de-average contrast normalization coefficient to obtain a feature vector;
• a second calculation module configured to calculate the difference signal matrices of the iris region image in the horizontal and vertical directions, and to perform block processing on the horizontal and vertical difference signal matrices to obtain a sub-feature set, where the sub-feature set includes at least: the overall activity of the difference signal, the local block activity and the number of low-intensity signals;
  • the third processing module is configured to obtain the first feature set according to the feature vector and the sub feature set.
  • the second calculating module may include:
  • a sixth processing sub-module configured to perform block processing on the differential signal matrix in the horizontal direction and the vertical direction according to the horizontal preset pixel and the vertical preset pixel, to obtain a plurality of partitions
• a first calculation sub-module configured to calculate the block-boundary average gradient of each block to obtain the overall activity in the horizontal direction and the overall activity in the vertical direction, and to calculate the average of the two to obtain the overall activity of the difference signal;
• a first extraction sub-module configured to extract the absolute value of the in-block average difference of each block to obtain the local block activity in the horizontal direction and the local block activity in the vertical direction, and to calculate the average of the two to obtain the local block activity.
  • the extracting module 125 may include:
  • a sampling module configured to perform down-sampling on the image of the pupil edge region twice to obtain a sample image and a sub-sample image
  • a blocking module configured to respectively segment the pupil edge region image, the sample image, and the subsampled image to obtain a plurality of first image blocks, a plurality of second image blocks, and a plurality of third image blocks;
  • a conversion module configured to perform discrete cosine transform on each of the first image blocks, each of the second image blocks, and each of the third image blocks, to obtain a plurality of processed first image blocks, and the processed plurality of second blocks An image block and a plurality of processed third image blocks;
• a fourth processing module configured to perform feature extraction on the processed plurality of first image blocks, the processed plurality of second image blocks and the processed plurality of third image blocks respectively, to obtain the second feature set of the pupil edge region image, the second feature set of the sampled image and the second feature set of the subsampled image.
  • the sampling module may include:
  • a first sampling sub-module configured to filter the image of the pupil edge region by using the first low-pass filter, and down-sample the filtered image of the pupil edge region to obtain a sample image
  • the second sampling submodule is configured to filter the sampled image by using the second low pass filter, and downsample the filtered sample image to obtain a subsampled image.
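• For illustration, this two-stage pyramid might be implemented with the minimal Python sketch below; the Gaussian low-pass filters and sigma=1.0 are assumptions, since the embodiments leave the first and second low-pass filters unspecified:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample_twice(roi):
    """Low-pass filter then decimate by 2, twice: e.g. 20x40 -> 10x20 -> 5x10.
    A Gaussian low-pass stands in for the unspecified filters."""
    sampled = gaussian_filter(roi, sigma=1.0)[::2, ::2]
    subsampled = gaussian_filter(sampled, sigma=1.0)[::2, ::2]
    return sampled, subsampled
```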
  • the blocking module may include:
  • a first block sub-module configured to perform block processing on the image of the pupil edge region according to the first preset block size, to obtain a plurality of first image blocks
  • a second block sub-module configured to perform block processing on the sampled image according to the second preset block size, to obtain a plurality of second image blocks
  • the third block sub-module is configured to perform block processing on the sub-sampled image according to the third preset block size to obtain a plurality of third image blocks.
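• A minimal sketch of the blocking performed by these sub-modules together with the per-block 2-D DCT applied by the conversion module is given below, assuming SciPy's dctn; unlike the blocking described in the embodiments, partial border blocks are simply dropped here, and dct_blocks is a hypothetical helper name:

```python
import numpy as np
from scipy.fft import dctn

def dct_blocks(image, block):
    """Tile the image into block x block patches and apply a 2-D DCT to each.
    Partial border blocks are ignored in this sketch."""
    h, w = image.shape
    out = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            out.append(dctn(image[r:r + block, c:c + block], norm="ortho"))
    return out

# blocks9 = dct_blocks(pupil_edge_roi, 9)   # first image blocks
# blocks5 = dct_blocks(sampled, 5)          # second image blocks
# blocks3 = dct_blocks(subsampled, 3)       # third image blocks
```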
  • the fourth processing module may include:
  • a second extraction sub-module configured to perform feature extraction on the plurality of first image blocks, the plurality of second image blocks, and the plurality of third image blocks, respectively, to obtain shape parameters of the pupil edge region image, shape parameters of the sampled image, and The shape parameter of the subsampled image;
  • a partitioning sub-module configured to respectively divide each first image block, each second image block and each third image block into a plurality of regions along a main diagonal direction to obtain a plurality of first image blocks respectively a partition, a plurality of partitions of each second image block, and a plurality of partitions of each third image block;
  • a third extraction sub-module configured to perform feature extraction on each of the first image blocks after the partitioning, each second image block after the partitioning, and each third image block after the partitioning, to obtain the frequency of the image of the pupil edge region Domain directional characteristics, frequency domain directional characteristics of the sampled image, and frequency domain directional characteristics of the subsampled image;
  • a fourth extraction sub-module configured to perform feature extraction on the plurality of first image blocks, the plurality of second image blocks, and the plurality of third image blocks respectively, to obtain frequency domain energy features of the pupil edge region image, and frequency domain of the sampled image Energy characteristics, as well as frequency domain energy characteristics of the subsampled image.
  • the second extraction submodule may include:
  • a first fitting submodule configured to respectively fit each first image block, each second image block, and each third image block by using a generalized Gaussian parameter model, to obtain a first image block a feature, a first feature of each second image block, and a first feature of each third image block, wherein the first feature comprises: a first parameter and a second parameter;
  • a second calculating submodule configured to respectively calculate a first feature of the plurality of first image blocks, a first feature of the plurality of second image blocks, and an average of the first features of the plurality of third image blocks to obtain a plurality of a first average of the first image block, a first average of the plurality of second image blocks, and a first average of the plurality of third image blocks;
• a first sorting sub-module configured to sort the first parameters of the plurality of first image blocks, the first parameters of the plurality of second image blocks and the first parameters of the plurality of third image blocks in ascending order, and to sort the second parameters of the plurality of first image blocks, the second parameters of the plurality of second image blocks and the second parameters of the plurality of third image blocks in descending order;
• a third calculation sub-module configured to respectively calculate the average of the first features of the top-ranked preset number of first image blocks, of the top-ranked preset number of second image blocks, and of the top-ranked preset number of third image blocks, to obtain the second average of the plurality of first image blocks, the second average of the plurality of second image blocks and the second average of the plurality of third image blocks;
• a seventh processing submodule configured to obtain the shape parameters of the pupil edge region image, the shape parameters of the sampled image and the shape parameters of the subsampled image according to, respectively, the first and second averages of the plurality of first image blocks, the first and second averages of the plurality of second image blocks, and the first and second averages of the plurality of third image blocks.
  • the third extraction submodule may include:
• a second fitting sub-module configured to fit each partition of each first image block, each partition of each second image block and each partition of each third image block with a generalized Gaussian distribution, to obtain the probability density of each partition of each first image block, of each second image block and of each third image block;
• a fourth calculation submodule configured to respectively calculate the variance of the probability densities of the multiple partitions of each first image block, of each second image block and of each third image block, to obtain the second feature of each first image block, the second feature of each second image block and the second feature of each third image block;
  • a fifth calculating submodule configured to respectively calculate a second feature of the plurality of first image blocks, a second feature of the plurality of second image blocks, and an average of the second features of the plurality of third image blocks to obtain a plurality of a third average of the first image block, a third average of the plurality of second image blocks, and a third average of the plurality of third image blocks;
  • a second sorting sub-module configured to perform a descending ordering on a second feature of the plurality of first image blocks, a second feature of the plurality of second image blocks, and a second feature of the plurality of third image blocks;
• a sixth calculation sub-module configured to respectively calculate the average of the second features of the top-ranked preset number of first image blocks, of the top-ranked preset number of second image blocks, and of the top-ranked preset number of third image blocks, to obtain the fourth average of the plurality of first image blocks, the fourth average of the plurality of second image blocks and the fourth average of the plurality of third image blocks;
• an eighth processing submodule configured to obtain the frequency domain direction feature of the pupil edge region image, the frequency domain direction feature of the sampled image and the frequency domain direction feature of the subsampled image according to, respectively, the third and fourth averages of the plurality of first image blocks, the third and fourth averages of the plurality of second image blocks, and the third and fourth averages of the plurality of third image blocks.
  • the fourth extraction submodule may include:
• a fifth extraction submodule configured to perform energy extraction on each first image block, each second image block and each third image block along the anti-diagonal direction, to obtain the multiple energies of each first image block, of each second image block and of each partition of each third image block;
  • a seventh calculation submodule for separately calculating a plurality of energies of each first image block, a plurality of energies of each second image block, and a difference of a plurality of energies of each partition of each third image block Obtaining a plurality of energy differences for each first image block, a plurality of energy differences for each second image block, and a plurality of energy differences for each partition of each third image block;
  • An eighth calculation sub-module configured to separately calculate a plurality of energy differences of each first image block, a plurality of energy differences of each second image block, and a plurality of energy differences of each partition of each third image block An average of the energy characteristics of each of the first image blocks, the energy characteristics of each of the second image blocks, and the energy characteristics of each of the third image blocks;
  • a ninth calculation sub-module configured to respectively calculate energy features of the plurality of first image blocks, energy features of the plurality of second image blocks, and average values of energy features of the plurality of third image blocks to obtain a plurality of first images a fifth average of the blocks, a fifth average of the plurality of second image blocks, and a fifth average of the plurality of third image blocks;
  • a third sorting sub-module configured to sort energy features of each first image block, energy features of each second image block, and energy features of each third image block;
• a tenth calculation sub-module configured to respectively calculate the average of the energy features of the top-ranked preset number of first image blocks, of the top-ranked preset number of second image blocks, and of the top-ranked preset number of third image blocks, to obtain the sixth average of the plurality of first image blocks, the sixth average of the plurality of second image blocks and the sixth average of the plurality of third image blocks;
  • a ninth processing submodule configured to respectively perform a fifth average value of the plurality of first image blocks and a sixth average value of the plurality of first image blocks, a fifth average value of the plurality of second image blocks, and a plurality of second a sixth average of the image block, and a fifth average of the plurality of third image blocks and a sixth average of the plurality of third image blocks, to obtain a frequency domain energy characteristic of the pupil edge region image, and a frequency domain energy of the sampled image Features, as well as frequency domain energy characteristics of the subsampled image.
  • the detecting module 127 may include:
  • the screening module is configured to filter the first feature set and the second feature set by using compression estimation to obtain a feature set of the iris image.
  • the detecting module 127 may include:
  • a classification module configured to classify a feature set of the iris image by using a preset classifier, and obtain a classification result of the iris image to be detected
  • a fifth processing module configured to obtain a detection result according to the classification result of the iris image to be detected.
  • the embodiment of the present application further provides an electronic device, as shown in FIG. 13, including a processor 131, a communication interface 132, a memory 133, and a communication bus 134, wherein the processor 131, the communication interface 132, The memory 133 completes communication with each other through the communication bus 134;
  • a memory 133 configured to store a computer program
• the processor 131, when executing the computer program stored in the memory 133, implements the method for detecting an iris image according to any one of the foregoing embodiments provided by the embodiments of the present application, where the method may include the steps of:
• acquiring an iris image to be detected; determining the iris region image and the pupil edge region image from the iris image; performing spatial domain feature extraction on the iris region image to obtain the first feature set, and performing frequency domain feature extraction on the pupil edge region image to obtain the second feature set; performing feature screening on the first feature set and the second feature set, and detecting the screened feature set to obtain a detection result, where the detection result indicates whether the iris image is clear.
• the processor of the electronic device runs the computer program stored in the memory to perform the method for detecting any iris image provided by the embodiments of the present application, thereby enabling blur detection of the iris image. It is easy to notice that, since the iris region image and the pupil edge region image are determined at the same time and the first and second feature sets are extracted from the two region images, the feature-set representation is more comprehensive and the detection accuracy is improved; further, after the first and second feature sets are extracted, feature screening of the two sets not only streamlines the feature set but also avoids redundant feature information and improves accuracy, thereby solving the technical problem that iris image blur detection methods in the prior art have low detection accuracy. Therefore, through the above embodiments of the present application, detection can be performed with a multi-region, multi-indicator method, improving system performance and robustness so that the system acquires high-quality iris images quickly and in a user-friendly manner.
• the embodiment of the present application further provides a computer program, which is run to execute the method for detecting an iris image according to any one of the foregoing embodiments provided by the embodiments of the present application, where the method may include the steps of:
• acquiring an iris image to be detected; determining the iris region image and the pupil edge region image from the iris image; performing spatial domain feature extraction on the iris region image to obtain the first feature set, and performing frequency domain feature extraction on the pupil edge region image to obtain the second feature set; performing feature screening on the first feature set and the second feature set, and detecting the screened feature set to obtain a detection result, where the detection result indicates whether the iris image is clear.
• the computer program, when run, performs the method for detecting any iris image provided by the embodiments of the present application, and can thus realize blur detection of the iris image. It is easy to notice that, since the iris region image and the pupil edge region image are determined at the same time and the first and second feature sets are extracted from the two region images, the feature-set representation is more comprehensive and the detection accuracy is improved; further, after the first and second feature sets are extracted, feature screening of the two sets not only streamlines the feature set but also avoids redundant feature information and improves accuracy, thereby solving the technical problem that iris image blur detection methods in the prior art have low detection accuracy. Therefore, through the above embodiments of the present application, detection can be performed with a multi-region, multi-indicator method, improving system performance and robustness so that the system acquires high-quality iris images quickly and in a user-friendly manner.
• the embodiment of the present application further provides a storage medium for storing a computer program, where the computer program, when run, performs the method for detecting an iris image according to any one of the foregoing embodiments provided by the embodiments of the present application, and the method may include the steps of:
• acquiring an iris image to be detected; determining the iris region image and the pupil edge region image from the iris image; performing spatial domain feature extraction on the iris region image to obtain the first feature set, and performing frequency domain feature extraction on the pupil edge region image to obtain the second feature set; performing feature screening on the first feature set and the second feature set, and detecting the screened feature set to obtain a detection result, where the detection result indicates whether the iris image is clear.
• the storage medium stores a computer program that, when run, executes the method for detecting any iris image provided by the embodiments of the present application, thereby enabling blur detection of the iris image. It is easy to notice that, since the iris region image and the pupil edge region image are determined at the same time and the first and second feature sets are extracted from the two region images, the feature-set representation is more comprehensive and the detection accuracy is improved; further, after the first and second feature sets are extracted, feature screening of the two sets not only streamlines the feature set but also avoids redundant feature information and improves accuracy, thereby solving the technical problem that iris image blur detection methods in the prior art have low detection accuracy. Therefore, through the above embodiments of the present application, detection can be performed with a multi-region, multi-indicator method, improving system performance and robustness so that the system acquires high-quality iris images quickly and in a user-friendly manner.
  • the disclosed technical contents may be implemented in other manners.
  • the device embodiments described above are only schematic.
• the division of the units may be a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the storage medium includes instructions for causing a computer device (which may be a personal computer, server or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present application.
• the foregoing storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disc, and other media that can store program code.

Abstract

The present application discloses a method and apparatus for detecting an iris image. The method includes: acquiring an iris image to be detected; determining an iris region image and a pupil edge region image from the iris image; performing spatial domain feature extraction on the iris region image to obtain a first feature set, and performing frequency domain feature extraction on the pupil edge region image to obtain a second feature set; performing feature screening on the first feature set and the second feature set, and detecting the screened feature set to obtain a detection result, where the detection result indicates whether the iris image is clear. The present application solves the technical problem that iris image blur detection methods in the prior art have low detection accuracy.

Description

METHOD AND APPARATUS FOR DETECTING AN IRIS IMAGE
This application claims priority to Chinese Patent Application No. 201610833796.8, filed with the Chinese Patent Office on September 19, 2016 and entitled "虹膜图像的检测方法和装置" (Method and Apparatus for Detecting an Iris Image), the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the field of image detection, and in particular to a method and apparatus for detecting an iris image.
BACKGROUND
Iris recognition is a recognition technology of high security with very broad application prospects. Iris image acquisition is the most important basic step in iris recognition, and the quality of the acquired iris image directly affects the performance of the iris recognition system. Among all low-quality iris images, blur is a particularly serious problem: a blurred iris image directly causes false-accept or false-reject errors during iris recognition.
Blur detection on a single-frame iris image, however, is a no-reference image blur assessment problem and is therefore difficult. Most existing methods perform detection based on the global image or only on a local iris region, and it is hard for them to obtain accurate results. For example, detection based on global image analysis is easily affected by noise such as glasses, eyelashes and light spots, while existing local-iris-region analysis methods also have defects: iris textures differ from person to person, some people naturally have little iris texture, and methods that use only features extracted from the iris region image easily overlook the clear iris images of such people.
No effective solution has yet been proposed for the problem of the low detection accuracy of iris image blur detection methods in the prior art.
SUMMARY
The present application provides a method and apparatus for detecting an iris image, so as to at least solve the technical problem that iris image blur detection methods in the prior art have low detection accuracy.
According to one aspect of the embodiments of the present application, a method for detecting an iris image is provided, including: acquiring an iris image to be detected; determining an iris region image and a pupil edge region image from the iris image; performing spatial domain feature extraction on the iris region image to obtain a first feature set, and performing frequency domain feature extraction on the pupil edge region image to obtain a second feature set; performing feature screening on the first feature set and the second feature set, and detecting the screened feature set to obtain a detection result, where the detection result indicates whether the iris image is clear.
According to another aspect of the embodiments of the present application, an apparatus for detecting an iris image is further provided, including: an obtaining module configured to acquire an iris image to be detected; a determining module configured to determine an iris region image and a pupil edge region image from the iris image; an extraction module configured to perform spatial domain feature extraction on the iris region image to obtain a first feature set, and to perform frequency domain feature extraction on the pupil edge region image to obtain a second feature set; and a detecting module configured to perform feature screening on the first feature set and the second feature set and to detect the screened feature set to obtain a detection result, where the detection result indicates whether the iris image is clear.
According to one aspect of the embodiments of the present application, an electronic device is provided, including a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with one another through the communication bus; the memory is configured to store a computer program; and the processor, when executing the computer program stored in the memory, implements the method for detecting an iris image according to any one of the embodiments provided by the present application.
According to one aspect of the embodiments of the present application, a computer program is provided, the computer program being run to execute the method for detecting an iris image according to any one of the embodiments provided by the present application.
According to one aspect of the embodiments of the present application, a storage medium is provided, the storage medium being configured to store a computer program, and the computer program being run to execute the method for detecting an iris image according to any one of the embodiments provided by the present application.
In the embodiments of the present application, an iris image to be detected can be acquired; an iris region image and a pupil edge region image are determined from the iris image; spatial domain feature extraction is performed on the iris region image to obtain a first feature set, and frequency domain feature extraction is performed on the pupil edge region image to obtain a second feature set; and the first and second feature sets are detected to obtain a detection result, thereby implementing blur detection of the iris image. It is easy to notice that, since the iris region image and the pupil edge region image are determined at the same time and the first and second feature sets are extracted from the two region images, the feature-set representation is more comprehensive and the detection accuracy is improved. Further, after the first and second feature sets are extracted, feature screening of the two sets not only streamlines the feature set and speeds up detection, but also avoids redundant feature information and improves accuracy, thereby solving the technical problem that iris image blur detection methods in the prior art have low detection accuracy. Therefore, through the present application, detection can be performed with a multi-region, multi-indicator method, improving system performance and robustness so that the system acquires high-quality iris images quickly and in a user-friendly manner.
BRIEF DESCRIPTION OF THE DRAWINGS
The drawings described here are provided for a further understanding of the present application and constitute a part of the present application; the schematic embodiments of the present application and their description are used to explain the present application and do not unduly limit it. In the drawings:
FIG. 1 is a flowchart of a method for detecting an iris image according to an embodiment of the present application;
FIG. 2 is a schematic diagram of pupil positioning in one implementation according to an embodiment of the present application;
FIG. 3 is a schematic diagram of iris to-be-determined region images in one implementation according to an embodiment of the present application;
FIG. 4a is a schematic diagram of a left iris region image in one implementation according to an embodiment of the present application;
FIG. 4b is a schematic diagram of a right iris region image in one implementation according to an embodiment of the present application;
FIG. 5 is a schematic diagram of pupil edge to-be-determined region images in one implementation according to an embodiment of the present application;
FIG. 6a is a schematic diagram of a left pupil edge region image in one implementation according to an embodiment of the present application;
FIG. 6b is a schematic diagram of a right pupil edge region image in one implementation according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an iris region image in one implementation according to an embodiment of the present application;
FIG. 8 is a schematic diagram of the MSCN coefficients of an iris region image in one implementation according to an embodiment of the present application;
FIG. 9 is a schematic diagram of the generalized Gaussian fit of the MSCN coefficients of an iris region image in one implementation according to an embodiment of the present application;
FIG. 10 is a schematic diagram of the acquisition and blocking of a pupil edge region image in one implementation according to an embodiment of the present application;
FIG. 11 is a schematic diagram of the DCT feature extraction of a pupil edge region image in one implementation according to an embodiment of the present application; and
FIG. 12 is a schematic diagram of an apparatus for detecting an iris image according to an embodiment of the present application;
FIG. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
DETAILED DESCRIPTION
To help those skilled in the art better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first", "second" and the like in the specification, claims and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application described here can be implemented in orders other than those illustrated or described. In addition, the terms "comprise" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that comprises a series of steps or units is not necessarily limited to the steps or units expressly listed, and may include other steps or units that are not expressly listed or that are inherent to the process, method, product or device.
Embodiment 1
According to an embodiment of the present application, a method embodiment of a method for detecting an iris image is provided. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from that given here.
FIG. 1 is a flowchart of a method for detecting an iris image according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps.
Step S102: acquire an iris image to be detected.
Specifically, the iris image may include the pupil, iris, sclera, eyelids and eyelashes, i.e. an image of the human eye region.
In one implementation, to perform blur detection on the iris image, a grayscale iris image to be detected may be collected.
That is, the iris image to be detected may be a grayscale image, referred to in the embodiments of the present application as a grayscale iris image. To perform blur detection on the iris image, in one implementation, all iris images described in the embodiments of the present application may be grayscale iris images.
Step S104: determine an iris region image and a pupil edge region image from the iris image.
In one implementation, the iris region image may be the image of the iris region in the iris image, and the pupil edge region image may be the image of the pupil edge region in the iris image, i.e. the inner edge region of the iris, which may include part of the iris region and part of the pupil region. Sharp edges are the areas of an image most easily affected by blur; in an iris image, the most obvious sharp edge is the pupil edge, and that area is not easily affected by noise. Therefore, in an ideal environment, the pupil edge provides the image information most useful for judging whether the iris image is blurred.
That is, in an ideal environment, the image information contained in the pupil edge region of the iris image is the information most useful for judging whether the iris image is blurred.
In one implementation, after the grayscale iris image is acquired, the vicinity of the pupil edge may be selected from the iris image as one Region Of Interest (ROI); so that images whose pupil edge sharpness is not obvious can also be judged, the iris region may be selected as another region of interest, yielding the iris region image and the pupil edge region image.
Step S106: perform spatial domain feature extraction on the iris region image to obtain a first feature set, and perform frequency domain feature extraction on the pupil edge region image to obtain a second feature set.
In one implementation, multiple feature extraction methods may be used to extract multiple features from the two ROIs; for example, the spatial domain features of the iris region and the frequency domain features of the pupil edge region may be extracted, yielding the feature sets used to evaluate the degree of blur of the iris image, i.e. the first feature set and the second feature set described above.
In other words, the two ROIs may include the pupil edge region and the iris region.
Step S108: perform feature screening on the first feature set and the second feature set, and detect the screened feature set to obtain a detection result, where the detection result indicates whether the iris image is clear.
In one implementation, after the first and second feature sets are extracted by multiple feature extraction methods, feature screening may be performed on them to obtain a final feature set, and detection is performed according to the final feature set to determine whether the collected iris image is clear, yielding the detection result.
Through the above embodiments of the present application, an iris image to be detected can be acquired; an iris region image and a pupil edge region image are determined from the iris image; spatial domain feature extraction is performed on the iris region image to obtain a first feature set, and frequency domain feature extraction is performed on the pupil edge region image to obtain a second feature set; and the first and second feature sets are detected to obtain a detection result, thereby implementing blur detection of the iris image. It is easy to notice that, since the iris region image and the pupil edge region image are determined at the same time and the first and second feature sets are extracted from the two region images, the feature-set representation is more comprehensive and the detection accuracy is improved. Further, after the first and second feature sets are extracted, feature screening of the two sets not only streamlines the feature set and speeds up detection, but also avoids redundant feature information and improves accuracy, thereby solving the technical problem that iris image blur detection methods in the prior art have low detection accuracy. Therefore, through the above embodiments of the present application, detection can be performed with a multi-region, multi-indicator method, improving system performance and robustness so that the system acquires high-quality iris images quickly and in a user-friendly manner.
In one implementation, in the above embodiments of the present application, step S104 of determining the iris region image and the pupil edge region image from the iris image may include:
Step S1042: position the iris image to obtain the radius and center coordinates of the pupil.
In one implementation, to obtain the iris region image and the pupil edge region image, after the grayscale iris image to be detected is acquired, a radial symmetry transform may be used to perform coarse pupil positioning on the iris image, obtaining the radius and center coordinates of the pupil, as shown in FIG. 2.
Step S1044: obtain a first to-be-determined region image and a second to-be-determined region image according to the radius, the center coordinates and a first preset range, and obtain a third to-be-determined region image and a fourth to-be-determined region image according to the radius, the center coordinates and a second preset range, where the first and second to-be-determined region images are located in the iris region, and the third and fourth to-be-determined region images are located in the pupil edge region.
In other words, the first and second to-be-determined region images may be images of the iris region in the grayscale iris image, and the third and fourth to-be-determined region images may be images of the pupil edge region in the grayscale iris image.
In one implementation, the first preset range may be a preset iris region range, and the second preset range a preset pupil edge region range; both may be set according to actual needs, or the range with the best detection performance may be selected through repeated experiments. For example, the first preset range may be two symmetric 60*55 subregions on both sides slightly below the pupil's horizontal level, and the second preset range two symmetric 20*40 subregions on both sides of the pupil in the horizontal direction; but they are not limited to this: the first preset range may also be two asymmetric 60*55 subregions slightly below the pupil's horizontal level, and the second preset range two asymmetric 20*40 subregions on both sides of the pupil in the horizontal direction.
In one implementation, after the radius and center coordinates of the pupil are determined, two symmetric 60*55 subregions slightly below the pupil's horizontal level may be selected as to-be-determined regions, as shown by the two boxes in FIG. 3, giving the first and second to-be-determined region images shown in FIGS. 4a and 4b (the irregular circles in the figures represent the characteristic texture of the iris). Two symmetric 20*40 subregions on both sides of the pupil in the horizontal direction may be selected as to-be-determined regions, as shown by the two boxes in FIG. 5, giving the third and fourth to-be-determined region images shown in FIGS. 6a and 6b.
Step S1046: obtain, from the first and second to-be-determined region images, the region images that satisfy a first preset condition to obtain the iris region image, and obtain, from the third and fourth to-be-determined region images, the region images that satisfy a second preset condition to obtain the pupil edge region image.
In one implementation, since many regions of an iris image are easily affected by noise, after the first, second, third and fourth to-be-determined region images are selected, the four to-be-determined region images may be screened: the region images satisfying the screening condition among the first and second to-be-determined region images serve as the iris region image, and the region images satisfying the screening condition among the third and fourth to-be-determined region images serve as the pupil edge region image.
In one implementation, in the above embodiments of the present application, step S1046 of obtaining, from the first and second to-be-determined region images, the region images satisfying the first preset condition to obtain the iris region image may include:
Step S112: judge whether the first to-be-determined region image and the second to-be-determined region image contain noise.
In one implementation, after the first and second to-be-determined region images are selected, the region images containing little light-spot and eyelash noise may be screened out by thresholding as the iris region image. Whether the first or second to-be-determined region image contains noise may be judged with

$$h_1=\begin{cases}1,& T_{\min}<I_{un}(i,j)<T_{\max}\ \text{for all } i\in\{1,\dots,M\},\ j\in\{1,\dots,N\}\\[2pt] 0,&\text{otherwise}\end{cases}$$

where $I_{un}(i,j)$ is a pixel of the input to-be-determined region image (the first or second to-be-determined region image), M and N are the height and width of the to-be-determined region image, $T_{\min}$ is the pupil-boundary and eyelash-noise gray threshold, and $T_{\max}$ the light-spot-noise gray threshold: a value below $T_{\min}$ indicates that the region image contains pupil-boundary or eyelash noise, and a value above $T_{\max}$ indicates that it contains light-spot noise.
That is, when a pixel value in the to-be-determined region image is below $T_{\min}$, that pixel is deemed to contain pupil-boundary or eyelash noise; when it is above $T_{\max}$, the pixel is deemed to contain light-spot noise.
If the formula yields $h_1=1$, i.e. every pixel value in the to-be-determined region image lies between $T_{\min}$ and $T_{\max}$, the region image is determined to be noise-free; if it yields $h_1=0$, i.e. some pixel value falls outside that interval, the region image is determined to contain noise.
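A minimal sketch of this check in Python, assuming the two gray thresholds are supplied by the caller:

```python
import numpy as np

def h1(region, t_min, t_max):
    """1 if every pixel lies strictly between the two gray thresholds
    (no pupil-boundary/eyelash or light-spot noise), else 0."""
    return int(np.all((region > t_min) & (region < t_max)))
```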
Step S114: if both the first and second to-be-determined region images contain noise, or neither of them contains noise, take both as the iris region image.
In one implementation, if the formula yields $h_1=1$ for both the first and second to-be-determined region images, neither contains noise, so both may serve as the iris region image; if it yields $h_1=0$ for both, both contain noise, and both may likewise serve as the iris region image.
Step S116: if the first to-be-determined region image contains noise and the second does not, replace the first with the second.
In one implementation, if the formula yields $h_1=0$ for the first to-be-determined region image and $h_1=1$ for the second, the first is determined to contain noise and the second to be noise-free; the first may therefore be replaced with the second, i.e. the gray values of the pixels in the first region image are replaced with those of the second (the pixel coordinates remain unchanged), and the replaced first region image together with the second serve as the iris region image.
That is, replacing the first to-be-determined region image with the second means: using the gray values of the pixels in the second to-be-determined region image to replace the gray values of the pixels at the same positions in the first.
Step S118: if the first to-be-determined region image is noise-free and the second contains noise, replace the second with the first.
In one implementation, if the formula yields $h_1=1$ for the first to-be-determined region image and $h_1=0$ for the second, the first is determined to be noise-free and the second to contain noise; the second may therefore be replaced with the first, i.e. the gray values of the pixels in the second region image are replaced with those of the first (the pixel coordinates remain unchanged), and the replaced second region image together with the first serve as the iris region image.
That is, replacing the second to-be-determined region image with the first means: using the gray values of the pixels in the first to-be-determined region image to replace the gray values of the pixels at the same positions in the second.
It can be understood here that the iris region image comprises two region images: the left iris region image ROI1 and the right iris region image ROI2.
Through steps S112 to S118, whether noise is contained can be judged by comparing the first and second to-be-determined region images against a first preset noise gray threshold range, and the to-be-determined region images satisfying the first preset condition are screened as the iris region image, reducing the influence of noise in the iris image and thereby improving the detection accuracy of the iris image.
The first preset noise gray threshold range may be the above $T_{\min}$ to $T_{\max}$.
In one implementation, in the above embodiments of the present application, step S1046 of obtaining, from the third and fourth to-be-determined region images, the region images satisfying the second preset condition to obtain the pupil edge region image may include:
Step S122: judge whether the third to-be-determined region image contains light-spot noise.
Specifically, since light-spot noise is generally distributed only on one side of the pupil edge or inside the pupil, the pupil edge region image can be determined by judging whether the third to-be-determined region image contains light-spot noise.
In one implementation, after the third and fourth to-be-determined region images are selected, the region with little light-spot noise may be screened by thresholding as the pupil edge region. Whether a to-be-determined region contains light-spot noise may be judged with

$$h_2=\begin{cases}1,& 0\le I_{un}(i,j)<T_{\max}\ \text{for all } i\in\{1,\dots,M'\},\ j\in\{1,\dots,N'\}\\[2pt] 0,&\text{otherwise}\end{cases}$$

where $I_{un}(i,j)$ is a pixel of the input to-be-determined region image (the third or fourth to-be-determined region image), M′ and N′ are the height and width of the to-be-determined region image, and $T_{\max}$ is the light-spot-noise threshold.
If the formula yields $h_2=1$, i.e. every pixel value in the to-be-determined region image lies between 0 and $T_{\max}$, the region image is determined to be free of light-spot noise; if it yields $h_2=0$, i.e. some pixel value falls outside that interval, the region image is determined to contain light-spot noise.
Step S124: if the third to-be-determined region image contains light-spot noise, take the fourth to-be-determined region image as the pupil edge region image.
In one implementation, if the formula yields $h_2=0$ for the third to-be-determined region image, it is determined to contain light-spot noise, so the fourth to-be-determined region image may serve as the pupil edge region image.
Step S126: if the third to-be-determined region image does not contain light-spot noise, take the third to-be-determined region image as the pupil edge region image.
In one implementation, if the formula yields $h_2=1$ for the third to-be-determined region image, it is determined to be free of light-spot noise, so the third to-be-determined region image may serve as the pupil edge region image.
It can be understood here that the pupil edge region image comprises only one region image, the pupil edge region image ROI3.
Through steps S122 to S126, whether light-spot noise is contained can be judged by comparing the third and fourth to-be-determined region images against a second preset noise gray threshold range, and the to-be-determined region image satisfying the second preset condition is screened as the pupil edge region image, reducing the influence of noise in the iris image and thereby improving the detection accuracy of the iris image.
The second preset noise gray threshold range may be the above 0 to $T_{\max}$.
In one implementation, in the above embodiments of the present application, step S106 of performing spatial domain feature extraction on the iris region image to obtain the first feature set includes:
Step S132: compute the de-averaged contrast normalization coefficients of the iris region image, and fit the de-averaged contrast normalization coefficients with a generalized Gaussian distribution to obtain a feature vector.
In one implementation, the de-averaged contrast normalization coefficients may be the MSCN (Mean Subtracted Contrast Normalized) coefficients. Since the MSCN coefficients already involve local mean removal, a zero-mean generalized Gaussian distribution may be chosen to fit the coefficients (i.e. μ = 0). The generalized Gaussian distribution (GGD) covers a wide range of distributions and can be used to capture the large differences in the tail response of the empirical probability curve of the MSCN coefficients. The generalized Gaussian distribution is defined as

$$f(x;\gamma,\sigma^2)=\frac{\gamma}{2\beta\,\Gamma(1/\gamma)}\exp\!\Big(-\big(|x|/\beta\big)^{\gamma}\Big),$$

where γ is the shape parameter and the scale parameter is

$$\beta=\sigma\sqrt{\Gamma(1/\gamma)/\Gamma(3/\gamma)}.$$

Γ is the gamma function, defined as

$$\Gamma(z)=\int_{0}^{\infty}t^{z-1}e^{-t}\,\mathrm{d}t,\qquad z>0.$$

In one implementation, the MSCN coefficients of the iris region image shown in FIG. 7 may be computed according to

$$\hat I(i,j)=\frac{I(i,j)-\mu(i,j)}{\sigma(i,j)+1},$$

with the result shown in FIG. 8, where $i\in\{1,2,\dots,M\}$, $j\in\{1,2,\dots,N\}$, M is the height of the iris region image and N its width; the local mean is

$$\mu(i,j)=\sum_{k=-K}^{K}\sum_{l=-L}^{L}\omega_{k,l}\,I(i+k,j+l)$$

and the local standard deviation is

$$\sigma(i,j)=\Big(\sum_{k=-K}^{K}\sum_{l=-L}^{L}\omega_{k,l}\,\big(I(i+k,j+l)-\mu(i,j)\big)^{2}\Big)^{1/2},$$

where $\omega_{k,l}$ is a weighting-coefficient template generated from a two-dimensional normalized Gaussian function, and K = L = 3. After the MSCN coefficients of the iris region image are obtained, the zero-mean generalized Gaussian parameter model may be used to fit the probability density curve of the MSCN coefficients, as shown in FIG. 9. The variance of the iris image coefficients is computed as

$$\hat\sigma^{2}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\hat I(i,j)^{2},$$

the mathematical expectation as

$$E=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big|\hat I(i,j)\big|,$$

and the probability density as

$$\rho=\frac{\hat\sigma^{2}}{E^{2}}.$$

Taking the shape parameter γ = 0.2:0.001:10, i.e. starting from γ = 0.2 and sampling every 0.001 up to γ = 10, the generalized Gaussian parameter ratio function

$$r(\gamma)=\frac{\Gamma(1/\gamma)\,\Gamma(3/\gamma)}{\Gamma(2/\gamma)^{2}}$$

is computed, and the γ with the smallest error is obtained from $\arg\min\{|\rho-r(\gamma)|\}$, giving the feature vector γ, where arg min denotes the value of the variable at which the objective function attains its minimum.
That is, the objective function may be $|\rho-r(\gamma)|$.
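For illustration only, the MSCN computation and the moment-matching search for the shape parameter γ may be sketched in Python as follows. The Gaussian window width sigma and the exact convention for ρ (here E[x²]/E[|x|]², which for a Gaussian equals r(2) = π/2) are assumptions, since the original formulas survive only as images; mscn and fit_ggd_shape are hypothetical helper names.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma as G

def mscn(img, sigma=7/6):
    """Mean-subtracted contrast-normalised coefficients of a grayscale image."""
    img = img.astype(np.float64)
    mu = gaussian_filter(img, sigma)                   # local mean mu(i, j)
    var = gaussian_filter(img * img, sigma) - mu * mu  # local variance
    return (img - mu) / (np.sqrt(np.abs(var)) + 1.0)   # +1 avoids division by 0

def fit_ggd_shape(coeffs):
    """Grid search gamma = 0.2:0.001:10 minimising |rho - r(gamma)|."""
    rho = np.mean(coeffs ** 2) / (np.mean(np.abs(coeffs)) ** 2 + 1e-12)
    gammas = np.arange(0.2, 10.001, 0.001)
    r = G(1 / gammas) * G(3 / gammas) / G(2 / gammas) ** 2
    return gammas[np.argmin(np.abs(rho - r))]
```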
Step S134: compute the difference signal matrices of the iris region image in the horizontal and vertical directions, and perform block processing on the horizontal and vertical difference signal matrices to obtain a sub-feature set, where the sub-feature set includes at least: the overall activity of the difference signal, the local block activity, and the number of low-intensity signals.
In one implementation, the horizontal and vertical difference signal matrices of the iris region image may be computed, and block processing performed on the horizontal and vertical difference signals to obtain the overall activity of the difference signal, the local block activity and the number of low-intensity signals of the iris region image. The difference signal matrices in the two directions may be computed as

$$d_{1}(i,j)=I(i+1,j)-I(i,j),\qquad d_{2}(i,j)=I(i,j+1)-I(i,j),$$

where k = 1 denotes the difference signal in the vertical direction and k = 2 the difference signal in the horizontal direction.
Step S136: obtain the first feature set from the feature vector and the sub-feature set.
In one implementation, the spatial domain feature set of the iris region, i.e. the first feature set described above, may be obtained from the feature vector γ, the overall activity of the difference signal, the local block activity and the number of low-intensity signals.
It can be understood here that using the number of low-intensity signals as the third feature of the difference signal can improve the detection accuracy.
In one implementation, in the above embodiments of the present application, step S134 of performing block processing on the horizontal and vertical difference signal matrices to obtain the sub-feature set may include:
Step S1342: perform block processing on the horizontal and vertical difference signal matrices at a horizontal preset pixel interval and a vertical preset pixel interval respectively, obtaining multiple blocks.
Specifically, since the iris texture is rich and fine, the horizontal and vertical preset pixel intervals may both be 3 pixels.
In one implementation, the horizontal difference signal may be divided into blocks at 3-pixel horizontal intervals, and the vertical difference signal at 3-pixel vertical intervals.
Step S1344: compute the block-boundary average gradient of each block to obtain the overall activity in the horizontal direction and the overall activity in the vertical direction, and compute the average of the two to obtain the overall activity of the difference signal.
In one implementation, the block-boundary average gradient $B_k$ of the iris region image, i.e. the average absolute value of the difference signal at the 3-pixel block boundaries, may be computed as the overall activity in the horizontal and vertical directions; the overall activities obtained in the two directions are averaged to give the feature component

$$B=\frac{B_{1}+B_{2}}{2}$$

as the overall activity of the difference signal.
Step S1346: extract the absolute value of the in-block average difference of each block to obtain the local block activity in the horizontal direction and the local block activity in the vertical direction, and compute the average of the two to obtain the local block activity.
In one implementation, the absolute value of the in-block average difference $A_k$ may be extracted as the local block activity in the horizontal and vertical directions; the local block activities obtained in the two directions are averaged to give the feature component

$$A=\frac{A_{1}+A_{2}}{2}$$

as the local block activity.
Step S1348: from the horizontal and vertical difference signal matrices, obtain the number of difference signals smaller than a preset value, giving the number of low-intensity signals in the horizontal direction and the number of low-intensity signals in the vertical direction, and compute the average of the two to obtain the number of low-intensity signals.
In one implementation, the preset value may be 2.
Step S1348 may be: from the horizontal and vertical difference signal matrices, obtain the number of horizontal difference signals smaller than the preset value as the number of low-intensity signals in the horizontal direction, and the number of vertical difference signals smaller than the preset value as the number of low-intensity signals in the vertical direction.
In one implementation, the number of horizontal and vertical difference signals smaller than 2 may be computed as

$$Z_{k}=\sum_{i,j} z_{k}(i,j),\qquad z_{k}(i,j)=\begin{cases}1,& |d_{k}(i,j)|<2\\ 0,&\text{otherwise}\end{cases}$$

serving as the numbers of low-intensity signals in the two directions, which are averaged to give the feature component

$$Z^{*}=\frac{Z_{1}+Z_{2}}{2}$$

as the number of low-intensity signals.
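The three difference-signal features can be sketched as below. Since the original block-boundary and in-block formulas are legible only as images, the boundary and in-block definitions used here are illustrative assumptions; spatial_block_features is a hypothetical helper.

```python
import numpy as np

def spatial_block_features(roi, step=3, low_thr=2):
    """Overall activity B, local block activity A and low-intensity count Z*
    from horizontal/vertical difference signals, averaged over directions."""
    d_v = np.diff(roi.astype(np.float64), axis=0)   # vertical differences
    d_h = np.diff(roi.astype(np.float64), axis=1)   # horizontal differences
    feats = []
    for d, axis in ((d_v, 0), (d_h, 1)):
        idx = np.arange(step - 1, d.shape[axis], step)
        bound = np.take(d, idx, axis=axis)
        b = np.mean(np.abs(bound))          # block-boundary average gradient
        a = np.mean(np.abs(d))              # in-block average difference
        z = np.sum(np.abs(d) < low_thr)     # low-intensity signal count
        feats.append((b, a, z))
    (b1, a1, z1), (b2, a2, z2) = feats
    return (b1 + b2) / 2, (a1 + a2) / 2, (z1 + z2) / 2
```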
It can be understood here that the following spatial domain features are finally extracted from the iris region images:
Spat_ROI1 = (γ_R1, A_R1, B_R1, Z*_R1): the spatial domain features extracted from the left iris region image;
Spat_ROI2 = (γ_R2, A_R2, B_R2, Z*_R2): the spatial domain features extracted from the right iris region image;
Spat_ROI1 and Spat_ROI2 form the first feature set of the iris region images described above.
In one implementation, in the above embodiments of the present application, step S106 of performing frequency domain feature extraction on the pupil edge region image to obtain the second feature set may include:
Step S142: downsample the pupil edge region image twice to obtain a sampled image and a subsampled image.
In one implementation, the pupil edge region image may be low-pass filtered and then downsampled twice, yielding the sampled image and the subsampled image respectively: the first downsampling yields the sampled image, and downsampling the sampled image once more yields the subsampled image.
Step S144: divide the pupil edge region image, the sampled image and the subsampled image into blocks respectively, obtaining multiple first image blocks, multiple second image blocks and multiple third image blocks.
In one implementation, the pupil edge region image, the sampled image and the subsampled image may each be divided into blocks, giving the multiple image blocks of each image.
Step S144 may include: dividing the pupil edge region image into blocks to obtain the multiple first image blocks; dividing the sampled image into blocks to obtain the multiple second image blocks; and dividing the subsampled image into blocks to obtain the multiple third image blocks.
Step S146: perform a discrete cosine transform on each first image block, each second image block and each third image block respectively, obtaining the processed multiple first image blocks, the processed multiple second image blocks and the processed multiple third image blocks.
Step S146 may include: performing a discrete cosine transform on each first image block to obtain the processed multiple first image blocks; performing a discrete cosine transform on each second image block to obtain the processed multiple second image blocks; and performing a discrete cosine transform on each third image block to obtain the processed multiple third image blocks.
In one implementation, the DCT (Discrete Cosine Transformation) may be applied to the pupil edge region image, the sampled image and the subsampled image. For example, as shown in FIG. 10, taking the sampled image as an example: the sampled image may be divided into 5*5 blocks, and applying the DCT to a 5*5 second image block yields a 5*5 DCT feature matrix.
Step S148: perform feature extraction on the processed multiple first image blocks, the processed multiple second image blocks and the processed multiple third image blocks respectively, obtaining the second feature set of the pupil edge region image, the second feature set of the sampled image and the second feature set of the subsampled image, where the second feature set includes at least: the shape parameters, the frequency domain direction features and the frequency domain energy features.
In one implementation, feature extraction may be performed on every image block to obtain the frequency domain features of the pupil edge image, of the sampled image and of the subsampled image, i.e. the second feature sets described above.
Step S148 may include: performing feature extraction on the processed multiple first image blocks to obtain the second feature set of the pupil edge region image; performing feature extraction on the processed multiple second image blocks to obtain the second feature set of the sampled image; and performing feature extraction on the processed multiple third image blocks to obtain the second feature set of the subsampled image.
In one implementation, in the above embodiments, step S142 of downsampling the pupil edge region image twice may include:
Step S1422: filter the pupil edge region image with the first low-pass filter, and downsample the filtered pupil edge region image to obtain the sampled image.
Step S1424: filter the sampled image with the second low-pass filter, and downsample the filtered sampled image to obtain the subsampled image.
Specifically, the first and second low-pass filters are set as needed, and the two low-pass filters may be identical.
In one implementation, as shown in FIG. 11, the 20*40 pupil edge region image may be filtered by the first low-pass filter and downsampled to give a 10*20 sampled image, which is then filtered by the second low-pass filter and downsampled to give a 5*10 subsampled image.
In one implementation, in the above embodiments, step S144 of dividing the three images into blocks may include:
Step S1442: perform block processing on the pupil edge region image at a first preset block size, obtaining the multiple first image blocks. Specifically, the first preset block size may be 9*9.
Step S1444: perform block processing on the sampled image at a second preset block size, obtaining the multiple second image blocks. Specifically, the second preset block size may be 5*5.
Step S1446: perform block processing on the subsampled image at a third preset block size, obtaining the multiple third image blocks. Specifically, the third preset block size may be 3*3.
In one implementation, as shown in FIG. 11, the pupil edge region image may be divided with 9*9 pixels to obtain 15 first image blocks, of which 8 are of size 9*9; the sampled image divided with 5*5 pixels to obtain 8 second image blocks of size 5*5; and the subsampled image divided with 3*3 pixels to obtain 8 third image blocks, of which 3 are of size 3*3. In the embodiments of the present application, a first image block of size 9*9 may be called a 9*9 first image block, a second image block of size 5*5 a 5*5 second image block, and a third image block of size 3*3 a 3*3 third image block.
In one implementation, in the above embodiments of the present application, step S148 of performing feature extraction on the processed multiple first, second and third image blocks to obtain the second feature sets of the pupil edge region image, of the sampled image and of the subsampled image may include:
Step S1482: perform feature extraction on the multiple first image blocks, the multiple second image blocks and the multiple third image blocks respectively, obtaining the shape parameters of the pupil edge region image, the shape parameters of the sampled image, and the shape parameters of the subsampled image.
Step S1482 may include: performing feature extraction on the multiple first image blocks to obtain the shape parameters of the pupil edge region image; performing feature extraction on the multiple second image blocks to obtain the shape parameters of the sampled image; and performing feature extraction on the multiple third image blocks to obtain the shape parameters of the subsampled image.
In one implementation, the DCT feature matrix of each 9*9 first image block may be processed to obtain the feature vector of each 9*9 first image block, and the feature vectors extracted from all 9*9 first image blocks aggregated to obtain the shape parameters of the pupil edge region image; the DCT feature matrix of each 5*5 second image block processed to obtain its feature vector, and the feature vectors extracted from all 5*5 second image blocks aggregated to obtain the shape parameters of the sampled image; and the DCT feature matrix of each 3*3 third image block processed to obtain its feature vector, and the feature vectors extracted from all 3*3 third image blocks aggregated to obtain the shape parameters of the subsampled image.
Step S1484: divide each first image block, each second image block and each third image block into multiple regions along the main diagonal direction respectively, obtaining the multiple partitions of each first image block, of each second image block and of each third image block, i.e. the partitioned image blocks.
In one implementation, to obtain the frequency direction features, the DCT feature matrix of each 9*9 first image block, of each 5*5 second image block and of each 3*3 third image block may be partitioned by direction according to a preset division into multiple direction partitions. As shown in FIG. 10, the partition may be made along the main diagonal direction at 30, 60 and 90 degrees, giving three direction partitions: the first, second and third direction partitions in FIG. 10.
Step S1486: perform feature extraction on each partitioned first image block, each partitioned second image block and each partitioned third image block respectively, obtaining the frequency domain direction feature of the pupil edge region image, the frequency domain direction feature of the sampled image, and the frequency domain direction feature of the subsampled image.
In one implementation, after the DCT feature matrices are partitioned by direction, the probability density of each partition of each 9*9 first image block may be extracted and the variance over the partitions of all 9*9 first image blocks computed to obtain the frequency domain direction feature of the pupil edge region image; the probability density of each partition of each 5*5 second image block extracted and the variance over the partitions of all 5*5 second image blocks computed to obtain the frequency domain direction feature of the sampled image; and the probability density of each partition of each 3*3 third image block extracted and the variance over the partitions of all 3*3 third image blocks computed to obtain the frequency domain direction feature of the subsampled image.
Step S1488: perform feature extraction on the multiple first image blocks, the multiple second image blocks and the multiple third image blocks respectively, obtaining the frequency domain energy feature of the pupil edge region image, the frequency domain energy feature of the sampled image, and the frequency domain energy feature of the subsampled image.
In one implementation, energy features may be extracted from the DCT feature matrix of each 9*9 first image block and the energy features of all 9*9 first image blocks aggregated to obtain the frequency domain energy features of the pupil edge region image; the energy feature vector of each 5*5 second image block extracted from its DCT feature matrix and the vectors of all 5*5 second image blocks aggregated to obtain the frequency domain energy features of the sampled image; and the energy feature vector of each 3*3 third image block extracted from its DCT feature matrix and the vectors of all 3*3 third image blocks aggregated to obtain the frequency domain energy features of the subsampled image.
In one implementation, in the above embodiments of the present application, step S1482 — performing feature extraction on the plurality of first, second and third image blocks to obtain the shape parameters of the pupil edge region image, the sampled image and the secondary sampled image — may include:
Step S150: fit each first image block, each second image block and each third image block with a generalized Gaussian parameter model, to obtain a first feature of each first image block, of each second image block, and of each third image block, where the first feature includes a first parameter and a second parameter.
This fitting may include: fitting each first image block with the generalized Gaussian parameter model to obtain the first feature of each first image block; fitting each second image block with the generalized Gaussian parameter model to obtain the first feature of each second image block; and fitting each third image block with the generalized Gaussian parameter model to obtain the first feature of each third image block.
In one implementation, the generalized Gaussian parameter model may be the generalized Gaussian distribution described above, the first parameter may be the shape parameter γ of the generalized Gaussian distribution, and the second parameter may be its probability density ρ.
Step S152: compute the averages of the first features of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks separately, to obtain a first average of the plurality of first image blocks, a first average of the plurality of second image blocks, and a first average of the plurality of third image blocks.
The above step S152 may include: computing the average of the first features of the plurality of first image blocks as the first average of the plurality of first image blocks; computing the average of the first features of the plurality of second image blocks as the first average of the plurality of second image blocks; and computing the average of the first features of the plurality of third image blocks as the first average of the plurality of third image blocks.
In one implementation, the first average includes the average of the first parameter and the average of the second parameter.
Step S154: sort the first parameters of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks in ascending order, respectively, and sort the second parameters of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks in descending order, respectively.
The above step S154 may include: sorting the first parameters of the plurality of first image blocks in ascending order, sorting the first parameters of the plurality of second image blocks in ascending order, and sorting the first parameters of the plurality of third image blocks in ascending order; and sorting the second parameters of the plurality of first image blocks in descending order, sorting the second parameters of the plurality of second image blocks in descending order, and sorting the second parameters of the plurality of third image blocks in descending order.
Step S156: compute the averages of the first features of the top preset number of first image blocks, of the top preset number of second image blocks, and of the top preset number of third image blocks, to obtain a second average of the plurality of first image blocks, a second average of the plurality of second image blocks, and a second average of the plurality of third image blocks.
The above step S156 may include: based on the queue obtained by sorting the first parameters of the plurality of first image blocks in ascending order, computing the average A of the first parameters of the top preset number of first image blocks in that queue, and, based on the queue obtained by sorting the second parameters of the plurality of first image blocks in descending order, computing the average B of the second parameters of the top preset number of first image blocks in that queue, then taking the two averages A and B as the second average of the plurality of first image blocks;
based on the queue obtained by sorting the first parameters of the plurality of second image blocks in ascending order, computing the average C of the first parameters of the top preset number of second image blocks in that queue, and, based on the queue obtained by sorting the second parameters of the plurality of second image blocks in descending order, computing the average D of the second parameters of the top preset number of second image blocks in that queue, then taking the two averages C and D as the second average of the plurality of second image blocks;
based on the queue obtained by sorting the first parameters of the plurality of third image blocks in ascending order, computing the average E of the first parameters of the top preset number of third image blocks in that queue, and, based on the queue obtained by sorting the second parameters of the plurality of third image blocks in descending order, computing the average F of the second parameters of the top preset number of third image blocks in that queue, then taking the two averages E and F as the second average of the plurality of third image blocks.
In one implementation, the preset number may be the top 10% of all sorted image blocks. The second average includes the average of the top preset number of first parameters and the average of the top preset number of second parameters.
Step S158: obtain the shape parameter of the pupil edge region image, the shape parameter of the sampled image, and the shape parameter of the secondary sampled image, from the first average and the second average of the plurality of first image blocks, the first average and the second average of the plurality of second image blocks, and the first average and the second average of the plurality of third image blocks, respectively.
The above step S158 may include: obtaining the shape parameter of the pupil edge region image from the first average and the second average of the plurality of first image blocks; obtaining the shape parameter of the sampled image from the first average and the second average of the plurality of second image blocks; and obtaining the shape parameter of the secondary sampled image from the first average and the second average of the plurality of third image blocks.
In one implementation, each image sub-block may be fitted with the generalized Gaussian parameter model described above to obtain the first feature $(\gamma_{i,j}, \rho_{i,j})$, comprising the first parameter $\gamma_{i,j}$ and the second parameter $\rho_{i,j}$, where

[fitting formula given only as an image in the source: PCTCN2017102265-appb-000022]

Here $M_2$ and $N_2$ are the height and width of the pupil edge region image or of its down-sampled image (i.e. the sampled image or the secondary sampled image), and $m$ is the block side length, so that the number of sub-blocks is on the order of $K = (M_2/m)(N_2/m)$. Statistics are then taken over the features of all sub-blocks of the pupil edge region image, the sampled image or the secondary sampled image:

$$\bar{\gamma} = \frac{1}{K}\sum_{i,j}\gamma_{i,j}$$

is the average of the first features $\gamma_{i,j}$ over all sub-blocks, i.e. the average of the first parameter described above, and

$$\bar{\rho} = \frac{1}{K}\sum_{i,j}\rho_{i,j}$$

is the average of the second features $\rho_{i,j}$ over all sub-blocks, i.e. the average of the second parameter described above, giving the first average $(\bar{\gamma},\ \bar{\rho})$. Further, $\bar{\gamma}^{\uparrow}$ denotes the average of the $\gamma_{i,j}$ of the top 10% of sub-blocks after sorting all sub-block first features in ascending order, and $\bar{\rho}^{\downarrow}$ denotes the average of the $\rho_{i,j}$ of the top 10% of sub-blocks after sorting all sub-block second features in descending order, giving the second average $(\bar{\gamma}^{\uparrow},\ \bar{\rho}^{\downarrow})$, where the upward arrow denotes ascending order and the downward arrow denotes descending order. This yields the shape parameter vector

$$\mathrm{Shape} = (\bar{\gamma},\ \bar{\rho},\ \bar{\gamma}^{\uparrow},\ \bar{\rho}^{\downarrow})$$

(the equations here are reconstructed from the surrounding text; the source gives them only as images). A sketch of these statistics follows.
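A pooling sketch, assuming the per-block generalized Gaussian fits (γ_{i,j}, ρ_{i,j}) have already been computed:

    import numpy as np

    def shape_parameter_vector(gammas, rhos, top=0.10):
        # Overall means, plus the means of the top 10% after sorting the
        # first parameters ascending and the second parameters descending.
        g = np.sort(np.asarray(gammas))         # ascending
        r = np.sort(np.asarray(rhos))[::-1]     # descending
        k = max(1, int(len(g) * top))
        return np.array([g.mean(), r.mean(), g[:k].mean(), r[:k].mean()])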
In one implementation, in the above embodiments of the present application, step S1486 — performing feature extraction on each partitioned first, second and third image block to obtain the frequency-domain direction features of the pupil edge region image, the sampled image and the secondary sampled image — may include:
Step S171: fit each partition of each first image block, each partition of each second image block, and each partition of each third image block with the generalized Gaussian distribution, to obtain the probability density of each partition of each first image block, of each second image block, and of each third image block.
The above step S171 may include: fitting each partition of each first image block with the generalized Gaussian distribution to obtain the probability density of each partition of each first image block; fitting each partition of each second image block with the generalized Gaussian distribution to obtain the probability density of each partition of each second image block; and fitting each partition of each third image block with the generalized Gaussian distribution to obtain the probability density of each partition of each third image block.
Step S172: compute the variance of the probability densities of the plurality of partitions of each first image block, of each second image block, and of each third image block separately, to obtain a second feature of each first image block, of each second image block, and of each third image block.
The above step S172 may include: computing the variance of the probability densities of the plurality of partitions of each first image block as the second feature of each first image block; computing the variance of the probability densities of the plurality of partitions of each second image block as the second feature of each second image block; and computing the variance of the probability densities of the plurality of partitions of each third image block as the second feature of each third image block.
Step S173: compute the averages of the second features of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks separately, to obtain a third average of the plurality of first image blocks, a third average of the plurality of second image blocks, and a third average of the plurality of third image blocks.
The above step S173 may include: computing the average of the second features of the plurality of first image blocks as the third average of the plurality of first image blocks; computing the average of the second features of the plurality of second image blocks as the third average of the plurality of second image blocks; and computing the average of the second features of the plurality of third image blocks as the third average of the plurality of third image blocks.
Step S174: sort the second features of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks in descending order.
The above step S174 may include: sorting the second features of the plurality of first image blocks in descending order; sorting the second features of the plurality of second image blocks in descending order; and sorting the second features of the plurality of third image blocks in descending order.
Step S175: compute the averages of the second features of the top preset number of first image blocks, of second image blocks, and of third image blocks, to obtain a fourth average of the plurality of first image blocks, a fourth average of the plurality of second image blocks, and a fourth average of the plurality of third image blocks.
The above step S175 may include: computing the average of the second features of the top preset number of first image blocks as the fourth average of the plurality of first image blocks; computing the average of the second features of the top preset number of second image blocks as the fourth average of the plurality of second image blocks; and computing the average of the second features of the top preset number of third image blocks as the fourth average of the plurality of third image blocks.
In one implementation, the preset number may be the top 10% of all sorted image blocks.
Step S176: obtain the frequency-domain direction feature of the pupil edge region image, of the sampled image, and of the secondary sampled image, from the third average and the fourth average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks, respectively.
The above step S176 may include: obtaining the frequency-domain direction feature of the pupil edge region image from the third average and the fourth average of the plurality of first image blocks; obtaining the frequency-domain direction feature of the sampled image from the third average and the fourth average of the plurality of second image blocks; and obtaining the frequency-domain direction feature of the secondary sampled image from the third average and the fourth average of the plurality of third image blocks.
In one implementation, after each image block is divided into the three partitions, a generalized Gaussian model may be fitted to the three parts separately to obtain $\rho_{i,j,1}, \rho_{i,j,2}, \rho_{i,j,3}$, i.e. the probability density of each partition described above, and the variance of $\rho_{i,j,1}, \rho_{i,j,2}, \rho_{i,j,3}$ gives $\sigma_{i,j}^2$, i.e. the second feature described above. The features $\sigma_{i,j}^2$ of all sub-blocks are computed; the average over all sub-blocks and the average of the top 10% of sub-blocks after descending sort then give the third average

$$\bar{\sigma^2} = \frac{1}{K}\sum_{i,j}\sigma_{i,j}^2$$

and the fourth average $\bar{\sigma^2}^{\downarrow}$ (the average of the top 10% of the $\sigma_{i,j}^2$ in descending order), yielding the frequency-domain direction feature vector

$$\mathrm{Dir} = (\bar{\sigma^2},\ \bar{\sigma^2}^{\downarrow})$$

(again reconstructed from the surrounding text; the source shows only images). A pooling sketch follows.
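A sketch of this pooling, assuming the per-block variances σ_{i,j}² have already been computed from the three partition densities:

    import numpy as np

    def direction_feature(sigma2, top=0.10):
        # Mean over all blocks, plus the mean of the top 10% of blocks
        # after sorting the variances in descending order.
        s = np.sort(np.asarray(sigma2))[::-1]
        k = max(1, int(len(s) * top))
        return np.array([s.mean(), s[:k].mean()])

    # per block: sigma2_ij = np.var([rho1, rho2, rho3]) over the three partitions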
In one implementation, in the above embodiments of the present application, step S1488 — performing feature extraction on the plurality of first, second and third image blocks to obtain the frequency-domain energy features of the pupil edge region image, the sampled image and the secondary sampled image — may include:
Step S181: perform energy extraction along the anti-diagonal direction on each first image block, each second image block and each third image block separately, to obtain a plurality of energies of each first image block, of each second image block, and of each third image block.
The above step S181 may include: performing energy extraction on each first image block to obtain the plurality of energies of each first image block; performing energy extraction on each second image block to obtain the plurality of energies of each second image block; and performing energy extraction along the anti-diagonal direction on each third image block to obtain the plurality of energies of each third image block.
In one implementation, as shown in Fig. 10, each block may be partitioned along the anti-diagonal direction at 30, 60 and 90 degrees into three energy partitions, namely the first energy partition, second energy partition and third energy partition in Fig. 10, from which the low-, mid- and high-frequency energies $E_{i,j,1}, E_{i,j,2}, E_{i,j,3}$ are extracted, where

[energy formula given only as an image in the source: PCTCN2017102265-appb-000033]
Step S182: compute the differences among the plurality of energies of each first image block, of each second image block, and of each third image block separately, to obtain a plurality of energy differences of each first image block, of each second image block, and of each third image block.
The above step S182 may include: computing the differences among the plurality of energies of each first image block to obtain the plurality of energy differences of each first image block; computing the differences among the plurality of energies of each second image block to obtain the plurality of energy differences of each second image block; and computing the differences among the plurality of energies of each partition of each third image block to obtain the plurality of energy differences of each third image block.
Step S183: compute the averages of the plurality of energy differences of each first image block, of each second image block, and of each third image block separately, to obtain an energy feature of each first image block, of each second image block, and of each third image block.
The above step S183 may include: computing the average of the plurality of energy differences of each first image block as the energy feature of each first image block; computing the average of the plurality of energy differences of each second image block as the energy feature of each second image block; and computing the average of the plurality of energy differences of each third image block as the energy feature of each third image block.
Step S184: compute the averages of the energy features of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks separately, to obtain a fifth average of the plurality of first image blocks, a fifth average of the plurality of second image blocks, and a fifth average of the plurality of third image blocks.
The above step S184 may include: computing the average of the energy features of the plurality of first image blocks as the fifth average of the plurality of first image blocks; computing the average of the energy features of the plurality of second image blocks as the fifth average of the plurality of second image blocks; and computing the average of the energy features of the plurality of third image blocks as the fifth average of the plurality of third image blocks.
Step S185: sort the energy features of each first image block, of each second image block, and of each third image block.
The above step S185 may include: sorting the energy features of the first image blocks; sorting the energy features of the second image blocks; and sorting the energy features of the third image blocks.
The above sorting may be in ascending order.
Step S186: compute the averages of the energy features of the top-ranked preset number of first image blocks, of second image blocks, and of third image blocks, to obtain a sixth average of the plurality of first image blocks, a sixth average of the plurality of second image blocks, and a sixth average of the plurality of third image blocks.
The above step S186 may include: computing the average of the energy features of the top-ranked preset number of first image blocks as the sixth average of the plurality of first image blocks; computing the average of the energy features of the top-ranked preset number of second image blocks as the sixth average of the plurality of second image blocks; and computing the average of the energy features of the top-ranked preset number of third image blocks as the sixth average of the plurality of third image blocks.
In one implementation, the preset number may be the top 10% of all sorted image blocks.
Step S187: obtain the frequency-domain energy feature of the pupil edge region image, of the sampled image, and of the secondary sampled image, from the fifth average and the sixth average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks, respectively.
The above step S187 may include: obtaining the frequency-domain energy feature of the pupil edge region image from the fifth average and the sixth average of the plurality of first image blocks; obtaining the frequency-domain energy feature of the sampled image from the fifth average and the sixth average of the plurality of second image blocks; and obtaining the frequency-domain energy feature of the secondary sampled image from the fifth average and the sixth average of the plurality of third image blocks.
In one implementation, after the low-, mid- and high-frequency energies $E_{i,j,1}, E_{i,j,2}, E_{i,j,3}$ of each image block have been extracted, the energy differences $\gamma_{i,j,1}$ and $\gamma_{i,j,2}$ may be computed by the following formula:

[difference formula given only as an image in the source: PCTCN2017102265-appb-000034]

The average of $\gamma_{i,j,1}$ and $\gamma_{i,j,2}$ then gives $\gamma_{i,j}$ for each sub-block, i.e. the energy feature described above. Taking the average of the features $\gamma_{i,j}$ over all sub-blocks, and the average over the top 10% of sub-blocks after ascending sort, gives the fifth average $\bar{\gamma}_E$ and the sixth average $\bar{\gamma}_E^{\uparrow}$ (notation introduced here; the source shows only images), yielding the frequency-domain energy feature vector

$$\mathrm{Energy} = (\bar{\gamma}_E,\ \bar{\gamma}_E^{\uparrow}).$$

A per-block sketch of the energy step follows.
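A per-block sketch of the energy step. Since the source formulas are given only as images, the absolute-sum band energy, the band boundaries by anti-diagonal index, and the plain band-to-band difference below are all our assumptions:

    import numpy as np

    def block_energy_feature(dct_block):
        i, j = np.indices(dct_block.shape)
        # split coefficients into low/mid/high bands by anti-diagonal index i + j
        band = (i + j) * 3 // (dct_block.shape[0] + dct_block.shape[1] - 1)
        E = [np.abs(dct_block[band == b]).sum() for b in (0, 1, 2)]
        g1, g2 = E[1] - E[0], E[2] - E[1]   # the two energy differences
        return (g1 + g2) / 2.0              # energy feature gamma_ij of this block

The fifth and sixth averages are then pooled over all blocks exactly as in the earlier sketches: the mean of all blocks, and the mean of the top 10% after ascending sort.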
It can be understood here that, by the above method, the following frequency-domain features, i.e. the second feature sets described above, can be extracted for the pupil edge region:

Freq_ROI3 [feature vector given only as an image in the source] — the frequency-domain features of the pupil edge region, i.e. the second feature set of the pupil edge region image described above;

Freq_down1 [image in the source] — the frequency-domain features of the first down-sampling of the pupil edge region, i.e. the second feature set of the sampled image described above;

Freq_down2 [image in the source] — the frequency-domain features extracted from the twice down-sampled image of the pupil edge region, i.e. the second feature set of the secondary sampled image described above.
In one implementation, in the above embodiments of the present application, step S108 — performing feature screening on the first feature set and the second feature set — may include:
Step S1082: screen the first feature set and the second feature set using compressed estimation, to obtain the feature set of the iris image.
In one implementation, the compressed estimation may be Lasso (Least Absolute Shrinkage and Selection Operator). Lasso is a compressed-estimation method whose basic idea is to estimate the regression coefficients that minimize the residual sum of squares subject to the constraint that the sum of the absolute values of the regression coefficients is less than a constant; features are then selected according to the regression coefficients forced exactly to zero, achieving dimensionality reduction. Its multiple linear model is defined as $y = X\alpha + \varepsilon$, where $y = (y_1, y_2, \ldots, y_n)^T$ are the classification labels, $X = (x_1, x_2, \ldots, x_n)$ is the feature set with $x_j = (x_{1j}, x_{2j}, \ldots, x_{nj})^T$, $j = 1, 2, \ldots, d$, $\alpha$ is the parameter to be estimated with $\dim(\alpha) = d$, and $\varepsilon$ is the error term. In modelling, we usually wish to keep the important variables in $X$ and set the others to zero, i.e.:

$$\hat{\alpha} = \arg\min_{\alpha} \lVert y - X\alpha \rVert^2, \quad \text{subject to} \quad \sum_{j=1}^{d} \lvert \alpha_j \rvert \le t,$$

where $t \ge 0$ is the tuning constant that controls the amount of shrinkage (the constraint is reconstructed from the surrounding text; the source gives it only as images). A minimal sketch of this screening with an off-the-shelf Lasso implementation is given below.
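A minimal sketch of the screening step with scikit-learn's Lasso; the placeholder data, the alpha value, and the use of sklearn are assumptions, not the patent's own implementation:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 32))            # placeholder for the 32-dim feature set
    y = rng.choice([-1.0, 1.0], size=200)     # placeholder clear(+1)/blurred(-1) labels

    lasso = Lasso(alpha=0.01)                 # alpha is an assumed regularisation strength
    lasso.fit(X, y)
    selected = np.flatnonzero(lasso.coef_)    # features whose coefficients are not forced to 0
    X_reduced = X[:, selected]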
On a self-built database, the Lasso feature selection results are shown in Table 1.

Table 1 [the table is given only as an image in the source: PCTCN2017102265-appb-000043; its contents cannot be reproduced here]
In one implementation, to streamline the feature set and reduce the time complexity, Lasso may be used to perform feature selection on the resulting 32-dimensional feature set, i.e. the first feature set and the second feature set described above. The actual selection result varies with the samples. For this database, to balance time efficiency and accuracy, the following features are finally selected:

Spat_ROI1′ = (B_R1, Z*_R1) is the feature vector of the left iris region, and Spat_ROI2′ = (γ_R2, B_R2, Z*_R2) is the feature vector of the right iris region, i.e. the first feature set of the iris region image described above;

Freq_ROI3′ [given only as an image in the source] is the feature vector of the pupil edge region, i.e. the second feature set of the pupil edge region image described above;

Freq_down1′ [image in the source] is the feature vector of the down-sampled image, i.e. the second feature set of the sampled image described above;

Freq_down2′ [image in the source] is the feature vector of the secondary sampled image, i.e. the second feature set of the secondary sampled image described above.

After Lasso feature selection, a 21-dimensional feature vector is finally retained from the 32-dimensional index set, forming the feature set LFSF = (Spat_ROI1′, Spat_ROI2′, Freq_ROI3′, Freq_down1′, Freq_down2′).
In one implementation, in the above embodiments of the present application, step S108 — detecting the first feature set and the second feature set to obtain the detection result — may include:
Step S1084: classify the feature set of the iris image with a preset classifier, to obtain the classification result of the iris image to be detected.
In the embodiments of the present application, the preset classifier may be any classifier capable of classifying features, such as an SVM (Support Vector Machine) classifier, an AdaBoost classifier or a joint Bayesian classifier. In the present application, an SVM classifier with a linear kernel (C-SVC) is taken as an example for detailed description.
Step S1086: obtain the detection result according to the classification result of the iris image to be detected.
In the embodiments of the present application, an SVM classifier with a linear kernel (C-SVC) may be used to classify the database samples, taking the final feature set obtained by Lasso feature selection as the SVM input samples. The recognition problem is a binary classification problem: clear images (+1) and blurred images (-1). A suitable penalty factor is finally chosen for training, giving the trained SVM classifier. The trained SVM classifier then performs a 0-1 classification of a candidate image: an image judged 0 is filtered out directly, i.e. it is a blurred image, while an image judged 1 is a clear image. A minimal sketch follows.
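A minimal sketch of the classification step with a linear-kernel SVM, reusing the placeholder X_reduced and y from the Lasso sketch above (the penalty factor C is an assumed tuning parameter):

    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X_train, X_test, y_train, y_test = train_test_split(X_reduced, y, random_state=0)
    clf = SVC(kernel='linear', C=1.0)     # linear-kernel C-SVC
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)            # +1: clear image; -1: blurred image, filtered out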
Embodiment 2
According to an embodiment of the present application, an apparatus embodiment of an apparatus for detecting an iris image is provided.
Fig. 12 is a schematic diagram of an apparatus for detecting an iris image according to an embodiment of the present application. As shown in Fig. 12, the apparatus includes:
an acquisition module 121, configured to acquire an iris image to be detected.
Specifically, the above iris image may include the pupil, the iris, the sclera, the eyelids and the eyelashes; that is, the iris image may be an image of the human eye region.
In one implementation, for blur detection of the iris image, a grayscale iris image to be detected may be collected; that is, the iris image to be detected may be a grayscale image, referred to in the embodiments of the present application as a grayscale iris image.
a determination module 123, configured to determine, from the iris image, an iris region image and a pupil edge region image, where the iris region image may be used to characterize the iris, and the pupil edge region image may be used to characterize the edge of the iris.
Specifically, the iris region image may be the image of the iris region within the iris image, and the pupil edge region image may be the image of the pupil edge region within the iris image, i.e. of the inner edge region of the iris, which may include the image of the iris region and the image of the pupil region. Sharp edges are the image regions most susceptible to blur; in an iris image the most prominent sharp edge is the pupil edge, and that region is not easily affected by noise. Therefore, under ideal conditions, the image information contained in the region of the pupil edge is the information most useful for judging whether the iris image is blurred.
In one implementation, after the grayscale iris image is acquired, the neighbourhood of the pupil edge may be selected from the iris image as one Region Of Interest (ROI); and, so that images whose pupil edge sharpness is not obvious can also be judged, the iris region may be selected as another region of interest, giving the iris region image and the pupil edge region image.
an extraction module 125, configured to perform spatial-domain feature extraction on the iris region image to obtain a first feature set, and perform frequency-domain feature extraction on the pupil edge region image to obtain a second feature set.
In one implementation, multiple feature-extraction methods may be used to extract multiple features from the two ROIs; for example, the spatial-domain features of the iris region and the frequency-domain features of the pupil edge region may be extracted, giving the feature sets used to evaluate the degree of blur of the iris image, i.e. the first feature set and the second feature set described above.
a detection module 127, configured to detect the first feature set and the second feature set to obtain a detection result, where the detection result is used to characterize whether the iris image is clear.
In one implementation, after the first and second feature sets have been extracted by the multiple feature-extraction methods, feature screening may be performed on them to obtain a final feature set, and detection is performed according to the final feature set to determine whether the collected iris image is clear, thereby obtaining the detection result.
Through the above embodiments of the present application, the iris image to be detected can be acquired; the iris region image and the pupil edge region image are determined from the iris image; spatial-domain feature extraction is performed on the iris region image to obtain the first feature set, and frequency-domain feature extraction is performed on the pupil edge region image to obtain the second feature set; and the first and second feature sets are detected to obtain the detection result, thereby realizing blur detection of the iris image. It is easy to notice that, since the iris region image and the pupil edge region image are determined at the same time and the first and second feature sets are extracted from the two region images, the feature-set characterization is more comprehensive and the detection accuracy is improved. Further, after the first and second feature sets are extracted, feature screening of the two sets not only streamlines the feature set and speeds up detection, but also avoids redundant feature information and improves accuracy, thereby solving the technical problem of the low detection accuracy of prior-art blur detection methods for iris images. Through the above embodiments of the present application, detection can therefore be performed by a multi-region, multi-index method, improving system performance and robustness and enabling the system to collect high-quality iris images quickly and in a user-friendly manner.
In one implementation, in the above embodiments of the present application, the determination module 123 may include:
a positioning module, configured to position the iris image to obtain the radius and the center coordinates of the pupil;
a first processing module, configured to obtain a first candidate region image and a second candidate region image according to the radius, the center coordinates and a first preset range, and obtain a third candidate region image and a fourth candidate region image according to the radius, the center coordinates and a second preset range, where the first candidate region image and the second candidate region image are located in the iris region, and the third candidate region image and the fourth candidate region image are located in the pupil edge region;
a second processing module, configured to acquire, from the first candidate region image and the second candidate region image, a region image satisfying a first preset condition to obtain the iris region image, and acquire, from the third candidate region image and the fourth candidate region image, a region image satisfying a second preset condition to obtain the pupil edge region image.
In one implementation, in the above embodiments of the present application, the first processing module may include:
a first judging sub-module, configured to judge whether the first candidate region image and the second candidate region image contain noise;
a first processing sub-module, configured to take the first candidate region image and the second candidate region image as the iris region image if both contain noise, or if neither contains noise;
a second processing sub-module, configured to replace the first candidate region image with the second candidate region image if the first candidate region image contains noise and the second candidate region image does not;
a third processing sub-module, configured to replace the second candidate region image with the first candidate region image if the first candidate region image does not contain noise and the second candidate region image does.
In one implementation, in the above embodiments of the present application, the second processing module may include:
a second judging sub-module, configured to judge whether the third candidate region image contains light-spot noise;
a fourth processing sub-module, configured to take the fourth candidate region image as the pupil edge region image if the third candidate region image contains light-spot noise;
a fifth processing sub-module, configured to take the third candidate region image as the pupil edge region image if the third candidate region image does not contain light-spot noise.
In one implementation, in the above embodiments of the present application, the extraction module 125 may include:
a first computation module, configured to compute the mean-subtracted contrast-normalized coefficients of the iris region image, and fit the mean-subtracted contrast-normalized coefficients with a generalized Gaussian distribution to obtain a feature vector;
a second computation module, configured to compute the horizontal and vertical difference signal matrices of the iris region image, and block the horizontal and vertical difference signal matrices to obtain a sub-feature set, where the sub-feature set includes at least: an overall difference-signal activity, a local block activity and a number of low-intensity signals;
a third processing module, configured to obtain the first feature set from the feature vector and the sub-feature set.
In one implementation, in the above embodiments of the present application, the second computation module may include:
a sixth processing sub-module, configured to block the horizontal and vertical difference signal matrices according to preset horizontal pixels and preset vertical pixels, respectively, to obtain a plurality of blocks;
a first computation sub-module, configured to compute the block-boundary average gradient of each block to obtain a horizontal overall activity and a vertical overall activity, and compute the average of the horizontal overall activity and the vertical overall activity to obtain the overall difference-signal activity;
a first extraction sub-module, configured to extract the absolute value of the within-block average difference of each block to obtain a horizontal local block activity and a vertical local block activity, and compute the average of the horizontal local block activity and the vertical local block activity to obtain the local block activity;
an acquisition sub-module, configured to acquire, from the horizontal and vertical difference signal matrices respectively, the numbers of difference signals smaller than a preset value to obtain a horizontal number of low-intensity signals and a vertical number of low-intensity signals, and compute the average of the horizontal number of low-intensity signals and the vertical number of low-intensity signals to obtain the number of low-intensity signals (a sketch of these three sub-features is given below).
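A sketch of how the three sub-features might be computed from the horizontal and vertical difference signal matrices; the block size, the low-intensity threshold, and the simplification noted in the comments are our assumptions:

    import numpy as np

    def difference_signal_features(img, block=16, low_thresh=2.0):
        img = img.astype(float)
        dh = np.diff(img, axis=1)    # horizontal difference signal matrix
        dv = np.diff(img, axis=0)    # vertical difference signal matrix

        def boundary_activity(d, axis):
            # average gradient sampled on block boundaries
            idx = np.arange(block - 1, d.shape[axis], block)
            return np.abs(np.take(d, idx, axis=axis)).mean()

        # overall difference-signal activity: mean over the two directions
        overall = (boundary_activity(dh, 1) + boundary_activity(dv, 0)) / 2.0
        # local block activity, simplified here to the mean absolute difference
        local = (np.abs(dh).mean() + np.abs(dv).mean()) / 2.0
        # number of low-intensity signals: counts below a preset value, averaged
        low = ((np.abs(dh) < low_thresh).sum() + (np.abs(dv) < low_thresh).sum()) / 2.0
        return overall, local, low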
In one implementation, in the above embodiments of the present application, the extraction module 125 may include:
a sampling module, configured to down-sample the pupil edge region image twice to obtain a sampled image and a secondary sampled image;
a blocking module, configured to block the pupil edge region image, the sampled image and the secondary sampled image separately, to obtain a plurality of first image blocks, a plurality of second image blocks and a plurality of third image blocks;
a transform module, configured to perform a discrete cosine transform on each first image block, each second image block and each third image block separately, to obtain a plurality of processed first image blocks, a plurality of processed second image blocks and a plurality of processed third image blocks;
a fourth processing module, configured to perform feature extraction on the processed first, second and third image blocks separately, to obtain a second feature set of the pupil edge region image, a second feature set of the sampled image and a second feature set of the secondary sampled image, where each second feature set includes at least: a shape parameter, a frequency-domain direction feature and a frequency-domain energy feature.
In one implementation, in the above embodiments of the present application, the sampling module may include:
a first sampling sub-module, configured to filter the pupil edge region image with a first low-pass filter, and down-sample the filtered pupil edge region image to obtain the sampled image;
a second sampling sub-module, configured to filter the sampled image with a second low-pass filter, and down-sample the filtered sampled image to obtain the secondary sampled image.
In one implementation, in the above embodiments of the present application, the blocking module may include:
a first blocking sub-module, configured to block the pupil edge region image according to a first preset block size, to obtain the plurality of first image blocks;
a second blocking sub-module, configured to block the sampled image according to a second preset block size, to obtain the plurality of second image blocks;
a third blocking sub-module, configured to block the secondary sampled image according to a third preset block size, to obtain the plurality of third image blocks.
In one implementation, in the above embodiments of the present application, the fourth processing module may include:
a second extraction sub-module, configured to perform feature extraction on the plurality of first, second and third image blocks separately, to obtain the shape parameter of the pupil edge region image, of the sampled image, and of the secondary sampled image;
a partitioning sub-module, configured to divide each first image block, each second image block and each third image block into a plurality of regions along the main diagonal direction, to obtain a plurality of partitions of each first image block, of each second image block, and of each third image block;
a third extraction sub-module, configured to perform feature extraction on each partitioned first image block, each partitioned second image block and each partitioned third image block separately, to obtain the frequency-domain direction feature of the pupil edge region image, of the sampled image, and of the secondary sampled image;
a fourth extraction sub-module, configured to perform feature extraction on the plurality of first, second and third image blocks separately, to obtain the frequency-domain energy feature of the pupil edge region image, of the sampled image, and of the secondary sampled image.
In one implementation, in the above embodiments of the present application, the second extraction sub-module may include:
a first fitting sub-module, configured to fit each first image block, each second image block and each third image block with a generalized Gaussian parameter model, to obtain a first feature of each first image block, of each second image block, and of each third image block, where the first feature includes a first parameter and a second parameter;
a second computation sub-module, configured to compute the averages of the first features of the plurality of first, second and third image blocks separately, to obtain a first average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks;
a first sorting sub-module, configured to sort the first parameters of the plurality of first, second and third image blocks in ascending order, respectively, and sort the second parameters of the plurality of first, second and third image blocks in descending order, respectively;
a third computation sub-module, configured to compute the averages of the first features of the top preset number of first image blocks, of second image blocks, and of third image blocks, to obtain a second average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks;
a seventh processing sub-module, configured to obtain the shape parameter of the pupil edge region image, of the sampled image, and of the secondary sampled image from the first average and the second average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks, respectively.
In one implementation, in the above embodiments of the present application, the third extraction sub-module may include:
a second fitting sub-module, configured to fit each partition of each first image block, each partition of each second image block, and each partition of each third image block with a generalized Gaussian distribution, to obtain the probability density of each partition of each first image block, of each second image block, and of each third image block;
a fourth computation sub-module, configured to compute the variance of the probability densities of the plurality of partitions of each first, second and third image block separately, to obtain a second feature of each first image block, of each second image block, and of each third image block;
a fifth computation sub-module, configured to compute the averages of the second features of the plurality of first, second and third image blocks separately, to obtain a third average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks;
a second sorting sub-module, configured to sort the second features of the plurality of first, second and third image blocks in descending order;
a sixth computation sub-module, configured to compute the averages of the second features of the top preset number of first image blocks, of second image blocks, and of third image blocks, to obtain a fourth average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks;
an eighth processing sub-module, configured to obtain the frequency-domain direction feature of the pupil edge region image, of the sampled image, and of the secondary sampled image from the third average and the fourth average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks, respectively.
In one implementation, in the above embodiments of the present application, the fourth extraction sub-module may include:
a fifth extraction sub-module, configured to perform energy extraction along the anti-diagonal direction on each first image block, each second image block and each third image block separately, to obtain a plurality of energies of each first image block, of each second image block, and of each third image block;
a seventh computation sub-module, configured to compute the differences among the plurality of energies of each first, second and third image block separately, to obtain a plurality of energy differences of each first image block, of each second image block, and of each third image block;
an eighth computation sub-module, configured to compute the averages of the plurality of energy differences of each first, second and third image block separately, to obtain an energy feature of each first image block, of each second image block, and of each third image block;
a ninth computation sub-module, configured to compute the averages of the energy features of the plurality of first, second and third image blocks separately, to obtain a fifth average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks;
a third sorting sub-module, configured to sort the energy features of each first image block, of each second image block, and of each third image block;
a tenth computation sub-module, configured to compute the averages of the energy features of the top-ranked preset number of first image blocks, of second image blocks, and of third image blocks, to obtain a sixth average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks;
a ninth processing sub-module, configured to obtain the frequency-domain energy feature of the pupil edge region image, of the sampled image, and of the secondary sampled image from the fifth average and the sixth average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks, respectively.
In one implementation, in the above embodiments of the present application, the detection module 127 may include:
a screening module, configured to screen the first feature set and the second feature set using compressed estimation, to obtain the feature set of the iris image.
In one implementation, in the above embodiments of the present application, the detection module 127 may further include:
a classification module, configured to classify the feature set of the iris image with a preset classifier, to obtain a classification result of the iris image to be detected;
a fifth processing module, configured to obtain the detection result according to the classification result of the iris image to be detected.
The serial numbers of the above embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
In the above embodiments of the present application, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
Corresponding to the above method embodiments, an embodiment of the present application further provides an electronic device. As shown in Fig. 13, it includes a processor 131, a communication interface 132, a memory 133 and a communication bus 134, where the processor 131, the communication interface 132 and the memory 133 communicate with one another through the communication bus 134;
the memory 133 is configured to store a computer program;
the processor 131 is configured to implement, when executing the computer program stored on the memory 133, any one of the iris image detection methods provided by the embodiments of the present application, where the detection method may include the steps of:
determining an iris region image and a pupil edge region image from the iris image; performing spatial-domain feature extraction on the iris region image to obtain a first feature set, and performing frequency-domain feature extraction on the pupil edge region image to obtain a second feature set; performing feature screening on the first feature set and the second feature set, and detecting the screened feature set to obtain a detection result, where the detection result is used to characterize whether the iris image is clear.
By applying this embodiment of the present application, the processor of the electronic device runs the computer program stored in the memory to execute any one of the iris image detection methods provided by the embodiments of the present application, and blur detection of iris images can therefore be realized. It is easy to notice that, since the iris region image and the pupil edge region image are determined at the same time and the first and second feature sets are extracted from the two region images, the feature-set characterization is more comprehensive and the detection accuracy is improved; further, feature screening of the two extracted feature sets not only streamlines the feature set and speeds up detection, but also avoids redundant feature information and improves accuracy, thereby solving the technical problem of the low detection accuracy of prior-art blur detection methods for iris images. Through the above embodiments of the present application, detection can therefore be performed by a multi-region, multi-index method, improving system performance and robustness and enabling the system to collect high-quality iris images quickly and in a user-friendly manner.
An embodiment of the present application further provides a computer program, which is run to execute any one of the iris image detection methods provided by the embodiments of the present application, where the detection method may include the steps of:
determining an iris region image and a pupil edge region image from the iris image; performing spatial-domain feature extraction on the iris region image to obtain a first feature set, and performing frequency-domain feature extraction on the pupil edge region image to obtain a second feature set; performing feature screening on the first feature set and the second feature set, and detecting the screened feature set to obtain a detection result, where the detection result is used to characterize whether the iris image is clear.
By applying this embodiment of the present application, the computer program, when run, executes any one of the iris image detection methods provided by the embodiments of the present application, and blur detection of iris images can therefore be realized. It is easy to notice that, since the iris region image and the pupil edge region image are determined at the same time and the first and second feature sets are extracted from the two region images, the feature-set characterization is more comprehensive and the detection accuracy is improved; further, feature screening of the two extracted feature sets not only streamlines the feature set and speeds up detection, but also avoids redundant feature information and improves accuracy, thereby solving the technical problem of the low detection accuracy of prior-art blur detection methods for iris images. Through the above embodiments of the present application, detection can therefore be performed by a multi-region, multi-index method, improving system performance and robustness and enabling the system to collect high-quality iris images quickly and in a user-friendly manner.
An embodiment of the present application provides a storage medium for storing a computer program, where the computer program is run to execute any one of the iris image detection methods provided by the embodiments of the present application, and the detection method may include the steps of:
determining an iris region image and a pupil edge region image from the iris image; performing spatial-domain feature extraction on the iris region image to obtain a first feature set, and performing frequency-domain feature extraction on the pupil edge region image to obtain a second feature set; performing feature screening on the first feature set and the second feature set, and detecting the screened feature set to obtain a detection result, where the detection result is used to characterize whether the iris image is clear.
By applying this embodiment of the present application, the storage medium stores a computer program that, when run, executes any one of the iris image detection methods provided by the embodiments of the present application, and blur detection of iris images can therefore be realized. It is easy to notice that, since the iris region image and the pupil edge region image are determined at the same time and the first and second feature sets are extracted from the two region images, the feature-set characterization is more comprehensive and the detection accuracy is improved; further, feature screening of the two extracted feature sets not only streamlines the feature set and speeds up detection, but also avoids redundant feature information and improves accuracy, thereby solving the technical problem of the low detection accuracy of prior-art blur detection methods for iris images. Through the above embodiments of the present application, detection can therefore be performed by a multi-region, multi-index method, improving system performance and robustness and enabling the system to collect high-quality iris images quickly and in a user-friendly manner.
In the several embodiments provided by the present application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units may be a division by logical function, and other divisions are possible in actual implementation — e.g. multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units or modules, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disc.
The above are only preferred embodiments of the present application. It should be pointed out that a person of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present application, and these improvements and refinements should also be regarded as falling within the protection scope of the present application.

Claims (16)

  1. A method for detecting an iris image, characterized by comprising:
    acquiring an iris image to be detected;
    determining, from the iris image, an iris region image and a pupil edge region image;
    performing spatial-domain feature extraction on the iris region image to obtain a first feature set, and performing frequency-domain feature extraction on the pupil edge region image to obtain a second feature set;
    performing feature screening on the first feature set and the second feature set, and detecting the screened feature set to obtain a detection result, wherein the detection result is used to characterize whether the iris image is clear.
  2. The method according to claim 1, characterized in that determining the iris region image and the pupil edge region image from the iris image comprises:
    positioning the iris image to obtain a radius and center coordinates of a pupil;
    obtaining a first candidate region image and a second candidate region image according to the radius, the center coordinates and a first preset range, and obtaining a third candidate region image and a fourth candidate region image according to the radius, the center coordinates and a second preset range, wherein the first candidate region image and the second candidate region image are located in an iris region, and the third candidate region image and the fourth candidate region image are located in a pupil edge region;
    acquiring, from the first candidate region image and the second candidate region image, a region image satisfying a first preset condition to obtain the iris region image, and acquiring, from the third candidate region image and the fourth candidate region image, a region image satisfying a second preset condition to obtain the pupil edge region image.
  3. The method according to claim 2, characterized in that acquiring, from the first candidate region image and the second candidate region image, the region image satisfying the first preset condition to obtain the iris region image comprises:
    judging whether the first candidate region image and the second candidate region image contain noise;
    if both the first candidate region image and the second candidate region image contain the noise, or neither the first candidate region image nor the second candidate region image contains the noise, taking the first candidate region image and the second candidate region image as the iris region image;
    if the first candidate region image contains the noise and the second candidate region image does not, replacing the first candidate region image with the second candidate region image;
    if the first candidate region image does not contain the noise and the second candidate region image does, replacing the second candidate region image with the first candidate region image.
  4. The method according to claim 2, characterized in that acquiring, from the third candidate region image and the fourth candidate region image, the region image satisfying the second preset condition to obtain the pupil edge region image comprises:
    judging whether the third candidate region image contains light-spot noise;
    if the third candidate region image contains the light-spot noise, taking the fourth candidate region image as the pupil edge region image;
    if the third candidate region image does not contain the light-spot noise, taking the third candidate region image as the pupil edge region image.
  5. The method according to claim 1, characterized in that performing spatial-domain feature extraction on the iris region image to obtain the first feature set comprises:
    computing mean-subtracted contrast-normalized coefficients of the iris region image, and fitting the mean-subtracted contrast-normalized coefficients with a generalized Gaussian distribution to obtain a feature vector;
    computing horizontal and vertical difference signal matrices of the iris region image, and blocking the horizontal and vertical difference signal matrices to obtain a sub-feature set, wherein the sub-feature set comprises at least: an overall difference-signal activity, a local block activity and a number of low-intensity signals;
    obtaining the first feature set from the feature vector and the sub-feature set.
  6. The method according to claim 5, characterized in that blocking the horizontal and vertical difference signal matrices to obtain the sub-feature set comprises:
    blocking the horizontal and vertical difference signal matrices according to preset horizontal pixels and preset vertical pixels, respectively, to obtain a plurality of blocks;
    computing a block-boundary average gradient of each block to obtain a horizontal overall activity and a vertical overall activity, and computing the average of the horizontal overall activity and the vertical overall activity to obtain the overall difference-signal activity;
    extracting the absolute value of the within-block average difference of each block to obtain a horizontal local block activity and a vertical local block activity, and computing the average of the horizontal local block activity and the vertical local block activity to obtain the local block activity;
    acquiring, from the horizontal and vertical difference signal matrices respectively, the numbers of difference signals smaller than a preset value to obtain a horizontal number of low-intensity signals and a vertical number of low-intensity signals, and computing the average of the horizontal number of low-intensity signals and the vertical number of low-intensity signals to obtain the number of low-intensity signals.
  7. The method according to claim 1, characterized in that performing frequency-domain feature extraction on the pupil edge region image to obtain the second feature set comprises:
    down-sampling the pupil edge region image twice to obtain a sampled image and a secondary sampled image;
    blocking the pupil edge region image, the sampled image and the secondary sampled image separately to obtain a plurality of first image blocks, a plurality of second image blocks and a plurality of third image blocks;
    performing a discrete cosine transform on each first image block, each second image block and each third image block separately, to obtain a plurality of processed first image blocks, a plurality of processed second image blocks and a plurality of processed third image blocks;
    performing feature extraction on the processed first image blocks, the processed second image blocks and the processed third image blocks separately, to obtain a second feature set of the pupil edge region image, a second feature set of the sampled image and a second feature set of the secondary sampled image, wherein each second feature set comprises at least: a shape parameter, a frequency-domain direction feature and a frequency-domain energy feature.
  8. The method according to claim 7, characterized in that performing feature extraction on the processed first image blocks, the processed second image blocks and the processed third image blocks separately, to obtain the second feature set of the pupil edge region image, the second feature set of the sampled image and the second feature set of the secondary sampled image, comprises:
    performing feature extraction on the plurality of first image blocks, the plurality of second image blocks and the plurality of third image blocks separately, to obtain the shape parameter of the pupil edge region image, the shape parameter of the sampled image, and the shape parameter of the secondary sampled image;
    dividing each first image block, each second image block and each third image block into a plurality of regions along the main diagonal direction, to obtain a plurality of partitions of each first image block, of each second image block, and of each third image block;
    performing feature extraction on each partitioned first image block, each partitioned second image block and each partitioned third image block separately, to obtain the frequency-domain direction feature of the pupil edge region image, of the sampled image, and of the secondary sampled image;
    performing feature extraction on the plurality of first image blocks, the plurality of second image blocks and the plurality of third image blocks separately, to obtain the frequency-domain energy feature of the pupil edge region image, of the sampled image, and of the secondary sampled image.
  9. The method according to claim 8, characterized in that performing feature extraction on the plurality of first, second and third image blocks to obtain the shape parameters of the pupil edge region image, the sampled image and the secondary sampled image comprises:
    fitting each first image block, each second image block and each third image block with a generalized Gaussian parameter model, to obtain a first feature of each first image block, of each second image block, and of each third image block, wherein the first feature comprises a first parameter and a second parameter;
    computing the averages of the first features of the plurality of first, second and third image blocks separately, to obtain a first average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks;
    sorting the first parameters of the plurality of first, second and third image blocks in ascending order, respectively, and sorting the second parameters of the plurality of first, second and third image blocks in descending order, respectively;
    computing the averages of the first features of the top preset number of first image blocks, of second image blocks, and of third image blocks, to obtain a second average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks;
    obtaining the shape parameter of the pupil edge region image, of the sampled image, and of the secondary sampled image from the first average and the second average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks, respectively.
  10. The method according to claim 8, characterized in that performing feature extraction on each partitioned first, second and third image block to obtain the frequency-domain direction features of the pupil edge region image, the sampled image and the secondary sampled image comprises:
    fitting each partition of each first image block, each partition of each second image block, and each partition of each third image block with a generalized Gaussian distribution, to obtain the probability density of each partition of each first image block, of each second image block, and of each third image block;
    computing the variance of the probability densities of the plurality of partitions of each first, second and third image block separately, to obtain a second feature of each first image block, of each second image block, and of each third image block;
    computing the averages of the second features of the plurality of first, second and third image blocks separately, to obtain a third average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks;
    sorting the second features of the plurality of first, second and third image blocks in descending order;
    computing the averages of the second features of the top preset number of first image blocks, of second image blocks, and of third image blocks, to obtain a fourth average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks;
    obtaining the frequency-domain direction feature of the pupil edge region image, of the sampled image, and of the secondary sampled image from the third average and the fourth average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks, respectively.
  11. The method according to claim 8, characterized in that performing feature extraction on the plurality of first, second and third image blocks to obtain the frequency-domain energy features of the pupil edge region image, the sampled image and the secondary sampled image comprises:
    performing energy extraction along the anti-diagonal direction on each first image block, each second image block and each third image block separately, to obtain a plurality of energies of each first image block, of each second image block, and of each third image block;
    computing the differences among the plurality of energies of each first, second and third image block separately, to obtain a plurality of energy differences of each first image block, of each second image block, and of each third image block;
    computing the averages of the plurality of energy differences of each first, second and third image block separately, to obtain an energy feature of each first image block, of each second image block, and of each third image block;
    computing the averages of the energy features of the plurality of first, second and third image blocks separately, to obtain a fifth average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks;
    sorting the energy features of each first image block, of each second image block, and of each third image block;
    computing the averages of the energy features of the top-ranked preset number of first image blocks, of second image blocks, and of third image blocks, to obtain a sixth average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks;
    obtaining the frequency-domain energy feature of the pupil edge region image, of the sampled image, and of the secondary sampled image from the fifth average and the sixth average of the plurality of first image blocks, of the plurality of second image blocks, and of the plurality of third image blocks, respectively.
  12. The method according to any one of claims 1 to 11, characterized in that performing feature screening on the first feature set and the second feature set comprises:
    screening the first feature set and the second feature set using compressed estimation, to obtain the feature set of the iris image.
  13. An apparatus for detecting an iris image, characterized by comprising:
    an acquisition module, configured to acquire an iris image to be detected;
    a determination module, configured to determine, from the iris image, an iris region image and a pupil edge region image;
    an extraction module, configured to perform spatial-domain feature extraction on the iris region image to obtain a first feature set, and perform frequency-domain feature extraction on the pupil edge region image to obtain a second feature set;
    a detection module, configured to perform feature screening on the first feature set and the second feature set, and detect the screened feature set to obtain a detection result, wherein the detection result is used to characterize whether the iris image is clear.
  14. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
    the memory is configured to store a computer program;
    the processor is configured to implement, when executing the computer program stored on the memory, the method for detecting an iris image according to any one of claims 1-12.
  15. A computer program, characterized in that the computer program is run to execute the method for detecting an iris image according to any one of claims 1-12.
  16. A storage medium, characterized in that the storage medium is configured to store a computer program, and the computer program is run to execute the method for detecting an iris image according to any one of claims 1-12.
PCT/CN2017/102265 2016-09-19 2017-09-19 Method and apparatus for detecting iris image WO2018050123A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610833796.8A CN107844737B (zh) 2016-09-19 2016-09-19 Method and apparatus for detecting iris image
CN201610833796.8 2016-09-19

Publications (1)

Publication Number Publication Date
WO2018050123A1 true WO2018050123A1 (zh) 2018-03-22

Family

ID=61619349

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/102265 WO2018050123A1 (zh) 2017-09-19 Method and apparatus for detecting iris image

Country Status (2)

Country Link
CN (1) CN107844737B (zh)
WO (1) WO2018050123A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503397A (zh) * 2023-06-26 2023-07-28 山东天通汽车科技股份有限公司 In-vehicle conveyor belt defect detection method based on image data

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109684915B (zh) * 2018-11-12 2021-01-01 温州医科大学 Pupil tracking image processing method
CN109567600B (zh) * 2018-12-05 2020-12-01 江西书源科技有限公司 Automatic accessory identification method for household water purifiers
CN111339885B (zh) * 2020-02-19 2024-05-28 平安科技(深圳)有限公司 User identity determination method based on iris recognition, and related apparatus
CN114764943A (zh) * 2020-12-30 2022-07-19 北京眼神智能科技有限公司 Strabismus pupil positioning method and apparatus, computer-readable storage medium and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129556A (zh) * 2011-04-14 2011-07-20 电子科技大学 Iris image definition discrimination method
CN105139019A (zh) * 2015-03-24 2015-12-09 北京天诚盛业科技有限公司 Method and apparatus for iris image screening
CN105160306A (zh) * 2015-08-11 2015-12-16 北京天诚盛业科技有限公司 Method and apparatus for judging iris image blur
CN105447440A (zh) * 2015-03-13 2016-03-30 北京天诚盛业科技有限公司 Real-time iris image evaluation method and apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8873810B2 (en) * 2009-03-02 2014-10-28 Honeywell International Inc. Feature-based method and system for blur estimation in eye images
CN101894256B (zh) * 2010-07-02 2012-07-18 西安理工大学 Iris recognition method based on odd-symmetric 2D Log-Gabor filters
CN103854011A (zh) * 2012-12-03 2014-06-11 西安元朔科技有限公司 Quality evaluation method for iris images
CN103198301B (zh) * 2013-04-08 2016-12-28 北京天诚盛业科技有限公司 Iris positioning method and apparatus
CN105117705B (zh) * 2015-08-26 2018-08-24 北京无线电计量测试研究所 Cascaded iris image quality evaluation method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129556A (zh) * 2011-04-14 2011-07-20 电子科技大学 Iris image definition discrimination method
CN105447440A (zh) * 2015-03-13 2016-03-30 北京天诚盛业科技有限公司 Real-time iris image evaluation method and apparatus
CN105139019A (zh) * 2015-03-24 2015-12-09 北京天诚盛业科技有限公司 Method and apparatus for iris image screening
CN105160306A (zh) * 2015-08-11 2015-12-16 北京天诚盛业科技有限公司 Method and apparatus for judging iris image blur

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WANG, HONG: "Non-Reference Iris Image Quality Assessment", CHINA MASTER'S THESES FULL-TEXT DATABASE, 15 April 2016 (2016-04-15), pages 21 - 35, ISSN: 1674-0246 *
YAO, CUILI: "Image Quality Assessment Algorithms in Iris Recognition", CHINA MASTER'S THESES FULL-TEXT DATABASE, 15 June 2012 (2012-06-15), pages 31 - 48, ISSN: 1674-0246 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503397A (zh) * 2023-06-26 2023-07-28 山东天通汽车科技股份有限公司 In-vehicle conveyor belt defect detection method based on image data
CN116503397B (zh) * 2023-06-26 2023-09-01 山东天通汽车科技股份有限公司 In-vehicle conveyor belt defect detection method based on image data

Also Published As

Publication number Publication date
CN107844737B (zh) 2020-10-27
CN107844737A (zh) 2018-03-27

Similar Documents

Publication Publication Date Title
WO2018050123A1 (zh) Method and apparatus for detecting iris image
US10839510B2 (en) Methods and systems for human tissue analysis using shearlet transforms
Bautista et al. Convolutional neural network for vehicle detection in low resolution traffic videos
Arora et al. Applications of fractional calculus in computer vision: a survey
US7953253B2 (en) Face detection on mobile devices
US7643659B2 (en) Facial feature detection on mobile devices
Wasnik et al. Assessing face image quality for smartphone based face recognition system
US10186050B2 (en) Method and system for detection of contraband narcotics in human digestive tract
CN109636824A (zh) 一种基于图像识别技术的多目标计数方法
Ilhan et al. Automated sperm morphology analysis approach using a directional masking technique
Madheswaran et al. Classification of brain MRI images using support vector machine with various Kernels
DE102017220752A1 Image processing device, image processing method and image processing program
WO2014066218A2 (en) Cast recognition method and device, and urine analyzer
CN109740429A (zh) 基于嘴角坐标平均值变化的笑脸识别方法
WO2016150239A1 (zh) Method and apparatus for iris image screening
Kalam et al. Gender classification using geometric facial features
JP7241598B2 (ja) Image processing method, image processing apparatus and image processing system
Nirmala et al. Glaucoma detection using wavelet based contourlet transform
Deshpande et al. Single frame super resolution of non-cooperative iris images
Gim et al. A novel framework for white blood cell segmentation based on stepwise rules and morphological features
Rajendran et al. Leukocytes Classification and Segmentation in Microscopic Blood Smear Image
Budiman et al. The effective noise removal techniques and illumination effect in face recognition using Gabor and Non-Negative Matrix Factorization
Hernández Structural analysis of textures based on LAW´ s filters
Kaur et al. An analysis on gender classification and age estimation approaches
CN111488843A (zh) Face sunglasses discrimination method based on stepwise suppression of miss and false alarm rates

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17850324

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17850324

Country of ref document: EP

Kind code of ref document: A1