CN107729834B - Rapid iris detection method based on differential block characteristics - Google Patents

Rapid iris detection method based on differential block characteristics

Info

Publication number
CN107729834B
CN107729834B (application CN201710934259.7A)
Authority
CN
China
Prior art keywords
iris
classifier
level
iris detection
differential block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710934259.7A
Other languages
Chinese (zh)
Other versions
CN107729834A (en)
Inventor
张小亮
戚纪纲
王秀贞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Superred Technology Co Ltd
Original Assignee
Beijing Superred Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Superred Technology Co Ltd filed Critical Beijing Superred Technology Co Ltd
Priority to CN201710934259.7A priority Critical patent/CN107729834B/en
Publication of CN107729834A publication Critical patent/CN107729834A/en
Application granted granted Critical
Publication of CN107729834B publication Critical patent/CN107729834B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • G06V 40/197 - Matching; Classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2148 - Generating training patterns; Bootstrap methods characterised by the process organisation or structure, e.g. boosting cascade
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V 10/507 - Summing image-intensity values; Histogram projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • G06V 40/193 - Preprocessing; Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a rapid iris detection method based on differential block pair features, which comprises the following steps: S1, training an iris detection cascade classifier; S2, importing the iris detection cascade classifier, acquiring an iris image to be detected, and constructing candidate rectangular frames; S3, in the front M-level part, judging whether a candidate rectangular frame contains an iris by using differential block pair features together with LBP histogram features; S4, in the rear N-level part, judging whether the candidate rectangular frame contains an iris by using differential block pair features alone, and if so, placing the current candidate rectangular frame into the set of pending iris rectangular frames; and S5, clustering the set of pending iris rectangular frames to obtain the final rectangular frame. By mixing strong and weak features, the front M-level classifiers quickly reject non-iris candidate rectangular frames, which accelerates the detection process and gives the method high speed and good real-time performance.

Description

Rapid iris detection method based on differential block characteristics
Technical Field
The invention relates to the field of iris detection, and in particular to a rapid iris detection method based on differential block pair features.
Background
With the popularization of biometric identification technology, more and more attention is being paid to it, and related products are appearing in large numbers. Iris recognition, as one branch of biometrics, offers reliable recognition performance and has attracted considerable attention in recent years; iris technology is already used in fields such as smart consumer electronics, finance, and the mobile Internet of Things. An iris recognition system comprises a number of algorithm modules, among which iris detection is one of the most basic and serves as the front end of the whole pipeline. An iris detection method with excellent performance determines, to a certain extent, how widely iris recognition products can be deployed, and an iris detection algorithm that is both fast and accurate can be deployed on low-end equipment.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention discloses a rapid iris detection method based on differential block pair features. By mixing strong and weak features, the method not only accelerates the detection process but also maintains the detection rate, so that the iris detection algorithm can run on lower-end devices.
A rapid iris detection method based on differential block pair features comprises the following steps:
S1, training and exporting an S-level iris detection cascade classifier: the front M (0 < M < S) levels are trained with mixed features formed from differential block pair features and LBP histogram features, the rear N (M < N < S) levels are trained with differential block pair features only, the position coordinates of the differential blocks are stored, and the trained S-level iris detection cascade classifier is exported;
S2, importing the S-level iris detection cascade classifier, acquiring the iris image to be detected, and constructing candidate rectangular frames by line-by-line scanning;
S3, in the front M-level part of the S-level iris detection cascade classifier, calculating the differential block pair features and the LBP histogram features of each candidate rectangular frame constructed in S2, and judging, through the front M-level iris detection cascade classifier, whether the mixed feature formed from the two indicates an iris; if so, go to S4, otherwise go to S5;
S4, in the rear N-level part of the S-level iris detection cascade classifier, calculating the differential block pair features of each candidate rectangular frame constructed in S2, and judging, through the rear N-level iris detection cascade classifier, whether these features indicate an iris; if so, place the current candidate rectangular frame into the set of pending iris rectangular frames and go to S5, otherwise go directly to S5;
S5, judging whether all candidate rectangular frames constructed in S2 have been processed; if not, go to S3, otherwise go to S6;
and S6, clustering the set of pending iris rectangular frames to obtain the final rectangular frame; if the set of pending iris rectangular frames is empty, there is no iris in the current image to be detected.
Further, training the S-stage iris detection cascade classifier in S1 includes the following steps:
s11, importing positive and negative iris samples;
s12, extracting the difference block pair characteristics and the LBP histogram characteristics of each iris sample;
S13, training the front M-level iris detection cascade classifier with the Adaboost algorithm, using differential block pair features and LBP histogram features;
S14, training the rear N-level iris detection cascade classifier with the Adaboost algorithm, using differential block pair features; if training converges, stop and go to S15, otherwise continue training until convergence;
and S15, stopping training and exporting the S-level iris detection cascade classifier.
Further, in S12, each iris sample is divided into a rows × b columns at equal intervals to obtain a × b patches; the LBP histogram features of the iris sample in each patch are extracted and finally concatenated to form the final LBP histogram feature.
Further, S13 includes the steps of:
s131, randomly generating two non-repetitive coordinate points in an iris sample in a uniformly distributed mode, wherein the two coordinate points form a block;
s132, generating c blocks according to the mode of S131;
S133, randomly selecting 2 blocks from the c blocks of S132 to form a combination, giving c × (c − 1) ÷ 2 combinations in total, and calculating, within the Adaboost algorithm, the absolute value of the difference of each combination to obtain the differential block pair features;
s134, connecting the differential block pair characteristics in the S133 and the LBP histogram characteristics in the S12 in series to form mixed characteristics;
S135, training the front M levels of the iris detection cascade classifier with the mixed features of S134; after training is finished, the score of each leaf node and the passing threshold of each level of classifier are obtained, and the position coordinates of the differential blocks are stored;
and S136, after the training of the first M-level iris detection cascade classifier is finished, turning to S14.
Further, S14 includes the steps of:
s141, randomly generating two non-repetitive coordinate points in an iris sample in a uniformly distributed mode, wherein the two coordinate points form a block;
S142, generating d blocks in the manner of S141;
S143, randomly selecting 2 blocks from the d blocks of S142 to form a combination, giving d × (d − 1) ÷ 2 combinations in total, and calculating, within the Adaboost algorithm, the absolute value of the difference of each combination to obtain the differential block pair features;
S144, training the rear N-level iris detection cascade classifier with the differential block pair features of S143; if training converges, the score of each leaf node and the passing threshold of each level of classifier are obtained, the position coordinates of the differential blocks are stored, and the process goes to S145; otherwise training continues until convergence;
and S145, after the training of the subsequent N-stage iris detection cascade classifier is finished, the step is switched to S15.
Further, the differential block pair feature is the difference between the sum of gray values in one block region and the sum of gray values in another block region.
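Written as a formula (a restatement of the definition above, with A and B denoting the two block regions and I(x, y) the gray value at pixel (x, y); steps S133 and S143 take the absolute value of this difference):

f_{A,B} = \sum_{(x,y) \in A} I(x,y) \; - \; \sum_{(x,y) \in B} I(x,y)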
Furthermore, the LBP histogram feature is obtained by scaling the image block to a specified size, dividing it into a plurality of small regions, extracting an LBP histogram from each region, and finally concatenating the LBP histograms of all the small regions to form the final LBP histogram feature.
Further, the candidate rectangular frames in S2 are constructed as follows: the iris image is repeatedly down-sampled by a fixed factor to build an L-layer image pyramid, and candidate rectangular frames are constructed on the i-th (0 < i ≤ L) pyramid layer by stepping at equal intervals in the horizontal and vertical directions.
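A minimal sketch of this candidate-frame construction is given below, assuming an illustrative down-sampling factor of 1.2 and an illustrative fixed window size; the function names (build_pyramid, generate_candidates) and parameter values are not taken from the patent.

import numpy as np

def build_pyramid(image, scale=1.2, min_size=48):
    """Repeatedly down-sample the image by a fixed factor to build an L-layer pyramid."""
    pyramid = [image]
    while min(pyramid[-1].shape[:2]) / scale >= min_size:
        prev = pyramid[-1]
        h, w = int(prev.shape[0] / scale), int(prev.shape[1] / scale)
        # Nearest-neighbour resampling keeps the sketch dependency-free.
        rows = (np.arange(h) * prev.shape[0] / h).astype(int)
        cols = (np.arange(w) * prev.shape[1] / w).astype(int)
        pyramid.append(prev[rows][:, cols])
    return pyramid

def generate_candidates(pyramid, window=48, step=8):
    """Slide a fixed-size window at equal-interval steps over every pyramid layer."""
    for i, layer in enumerate(pyramid):
        h, w = layer.shape[:2]
        for y in range(0, h - window + 1, step):
            for x in range(0, w - window + 1, step):
                yield i, (x, y, window, window)   # layer index and candidate frame (x, y, w, h)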
Further, S3 includes the steps of:
S31, for each candidate rectangular frame on the i-th pyramid layer, the front M-level iris detection cascade classifier calculates the LBP histogram features, calculates the differential block pair features according to the stored position coordinates of the differential blocks, and takes the mixed feature formed from the LBP histogram features and the differential block pair features as the expression feature of the iris;
S32, inputting the mixed feature into the j-th (0 < j ≤ M) level classifier, where it falls into one leaf node of the j-th level classifier;
S33, obtaining the score of the current leaf node and comparing it with the passing threshold of the j-th level classifier; if the score is greater than the passing threshold, the j-th level classifier is passed and the process returns to S32 to compare the mixed feature against the (j+1)-th level classifier; after all of the front M levels have been compared, if the mixed feature passed every level, the candidate is judged to be an iris and the process goes to S4; otherwise go to S5.
Further, S4 specifically includes the following steps:
S41, for each candidate rectangular frame on the i-th pyramid layer, the rear N-level iris detection cascade classifier calculates the differential block pair features according to the stored position coordinates of the differential blocks;
S42, inputting the differential block pair features calculated in S41 into the k-th (M < k ≤ S) level classifier, where they fall into one leaf node of the k-th level classifier;
S43, obtaining the score of the current leaf node and comparing it with the passing threshold of the k-th level classifier; if the score is greater than the passing threshold, the k-th level classifier is passed and the process returns to S42 to compare the differential block pair features against the (k+1)-th level classifier; after all of the rear N levels have been compared, if the features passed every level, the candidate is judged to be an iris, the current candidate rectangular frame is placed into the set of pending iris rectangular frames, and the process goes to S5; otherwise it goes directly to S5.
the invention has the beneficial effects that: in the front part of the cascade classifier, strong and weak mixed features formed by simply-calculated differential block pair features and more-complicated-calculated LBP histogram features are adopted as expression features of the iris to express the attribute structure of the iris, and the characteristic of distinguishing non-iris regions is achieved, so that the detection method provided by the invention has high detection rate, and meanwhile, only the differential block features are adopted in the rear part of the cascade classifier; the iris detection algorithm provided by the invention can be operated on lower-end equipment, and has a wider application range.
Drawings
FIG. 1 is a block diagram of a fast iris detection process of the present invention;
FIG. 2 is a block diagram of a process for training an iris detection cascade classifier of the present invention;
FIG. 3 is a schematic diagram of randomly selecting two patches when computing differential patch pair features;
FIG. 4 is a diagram illustrating image division when computing LBP histogram features.
Detailed Description
The invention is described in further detail below with reference to specific embodiments and with reference to the attached drawings.
As shown in FIG. 1, a rapid iris detection method based on differential block pair features includes the following steps:
S1, training an S-level iris detection cascade classifier: the front M (0 < M < S) levels are trained with mixed features formed from differential block pair features and LBP histogram features, the rear N (M < N < S) levels are trained with differential block pair features only, and the position coordinates of the differential blocks are stored;
S2, importing the S-level iris detection cascade classifier, acquiring the iris image to be detected, and constructing candidate rectangular frames by line-by-line scanning; the candidate rectangular frames are constructed as follows: the iris image is repeatedly down-sampled by a fixed factor to build an L-layer image pyramid, and candidate rectangular frames are constructed on the i-th (0 < i ≤ L) pyramid layer at equal intervals in the horizontal and vertical directions, with the size of a predefined fixed window used as the step length;
S3, in the front M-level part of the S-level iris detection cascade classifier, calculating the differential block pair features and the LBP histogram features of each candidate rectangular frame constructed in S2, taking the mixed feature formed from the two as the expression feature of the iris, and judging, through the front M-level iris detection cascade classifier, whether it indicates an iris; if so, go to S4, otherwise go to S5; this specifically comprises the following steps:
S31, for each candidate rectangular frame on the i-th pyramid layer, the front M-level iris detection cascade classifier calculates the LBP histogram features, calculates the differential block pair features according to the stored position coordinates of the differential blocks, and takes the mixed feature formed from the LBP histogram features and the differential block pair features as the expression feature of the iris;
S32, inputting the mixed feature into the j-th (0 < j ≤ M) level classifier, where it falls into one leaf node of the j-th level classifier;
S33, obtaining the score of the current leaf node and comparing it with the passing threshold of the j-th level classifier; if the score is greater than the passing threshold, the j-th level classifier is passed and the process returns to S32 to compare the mixed feature against the (j+1)-th level classifier; after all of the front M levels have been compared, if the mixed feature passed every level, the candidate is judged to be an iris and the process goes to S4; otherwise go to S5;
S4, the rear N-level iris detection cascade classifier calculates the differential block pair features of each candidate rectangular frame constructed in S2 and judges whether they indicate an iris; if so, the current candidate rectangular frame is placed into the set of pending iris rectangular frames and the process goes to S5, otherwise it goes directly to S5; this specifically comprises the following steps:
S41, for each candidate rectangular frame on the i-th pyramid layer, the rear N-level iris detection cascade classifier calculates the differential block pair features according to the stored position coordinates of the differential blocks;
S42, inputting the differential block pair features calculated in S41 into the k-th (M < k ≤ S) level classifier, where they fall into one leaf node of the k-th level classifier;
S43, obtaining the score of the current leaf node and comparing it with the passing threshold of the k-th level classifier; if the score is greater than the passing threshold, the k-th level classifier is passed and the process returns to S42 to compare the differential block pair features against the (k+1)-th level classifier; after all of the rear N levels have been compared, if the features passed every level, the candidate is judged to be an iris, the current candidate rectangular frame is placed into the set of pending iris rectangular frames, and the process goes to S5; otherwise go directly to S5.
S5, judging whether all candidate rectangular frames constructed in S2 have been processed; if not, go to S3, otherwise go to S6, that is, go to S6 only after all candidate rectangular frames on the L-layer pyramid of S2 have been evaluated;
and S6, clustering the set of pending iris rectangular frames to obtain the final rectangular frame; if the set of pending iris rectangular frames is empty, it is judged that the iris image to be detected contains no iris.
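A minimal sketch of the clustering in S6 follows. The patent does not specify a particular clustering algorithm, so this sketch uses a simple overlap-based grouping (frames whose intersection-over-union exceeds a threshold are averaged into one result); the function names and the IoU threshold are illustrative assumptions.

import numpy as np

def iou(a, b):
    """Intersection-over-union of two frames given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def cluster_pending_frames(pending, iou_thresh=0.5):
    """Group overlapping pending iris frames and return one averaged frame per group."""
    groups = []
    for frame in pending:
        for group in groups:
            if iou(frame, group[0]) > iou_thresh:
                group.append(frame)
                break
        else:
            groups.append([frame])
    # The final rectangle of each group is the element-wise mean of its members.
    return [tuple(np.mean(g, axis=0).astype(int)) for g in groups]

With an empty pending set the function returns an empty list, matching the case in S6 where no iris is present in the image.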
The training of the S-level iris detection cascade classifier in S1 specifically includes the following steps, as shown in FIG. 2:
s11, importing positive and negative iris samples;
S12, dividing each iris sample into 4 rows and 4 columns at equal intervals to obtain 16 small blocks, extracting the LBP histogram features of the iris sample in each small block, and finally concatenating the per-block histograms to form the final LBP histogram feature.
S13, training a front M-level iris detection cascade classifier by adopting an Adaboost algorithm, and specifically comprising the following steps:
s131, randomly generating two non-repetitive coordinate points in an iris sample in a uniformly distributed mode, wherein the two coordinate points form a block;
s132, generating 500 blocks according to the mode of S131;
S133, randomly selecting 2 blocks from the 500 blocks of S132 to form a combination, giving 124,750 (500 × 499 ÷ 2) combinations in total, and calculating, within the Adaboost algorithm, the absolute value of the difference of each combination to obtain the differential block pair features;
s134, connecting the differential block pair characteristics in the S133 and the LBP histogram characteristics in the S12 in series to form mixed characteristics;
S135, training the front M-level iris detection cascade classifier with the mixed features of S134; following the principle that samples with the same attribute are assigned to the same leaf node, the score of each leaf node and the passing threshold of each level of classifier are obtained when training is finished, and the position coordinates of the differential blocks are stored;
and S136, after the training of the first M-level iris detection cascade classifier is finished, turning to S14.
S14, training the rear N-level iris detection cascade classifier with the Adaboost algorithm, specifically comprising the following steps:
s141, randomly generating two non-repetitive coordinate points in an iris sample in a uniformly distributed mode, wherein the two coordinate points form a block;
S142, generating 500 blocks in the manner of S141;
S143, randomly selecting 2 blocks from the 500 blocks of S142 to form a combination, giving 124,750 (500 × 499 ÷ 2) combinations in total, and calculating, within the Adaboost algorithm, the absolute value of the difference of each combination to obtain the differential block pair features;
S144, training the rear N-level iris detection cascade classifier with the differential block pair features of S143; following the principle that samples with the same attribute are assigned to the same leaf node, the score of each leaf node and the passing threshold of each level of classifier are obtained once training has converged, the position coordinates of the differential blocks are stored, and the process goes to S145; if training has not converged, it continues until convergence;
and S145, after the training of the subsequent N-stage iris detection cascade classifier is finished, the step is switched to S15.
And S15, stopping training and exporting the S-level iris detection cascade classifier.
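The stage-by-stage training loop of S13 and S14 can be sketched as follows. This is not the patent's exact procedure: it substitutes scikit-learn's AdaBoostClassifier for the boosted classifiers, uses pre-computed feature matrices as stand-ins for the mixed and differential block pair features, and picks each passing threshold so that a chosen fraction of the positive samples passes; all names and parameter values are assumptions.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_stage(pos, neg, keep_rate=0.99):
    """Train one cascade stage and pick its passing threshold from the positive scores."""
    X = np.vstack([pos, neg])
    y = np.hstack([np.ones(len(pos)), np.zeros(len(neg))])
    clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
    # Passing threshold chosen so that roughly keep_rate of the positives pass this stage.
    threshold = np.quantile(clf.decision_function(pos), 1.0 - keep_rate)
    return clf, threshold

def train_cascade(pos_mixed, neg_mixed, pos_diff, neg_diff, M=3, N=5):
    """Front M levels on mixed features, rear N levels on differential block pair features only.
    neg_mixed and neg_diff hold the two feature types for the same negative samples, row for row."""
    stages = []
    for _ in range(M):                                   # front M levels
        if len(neg_mixed) == 0:
            break
        clf, thr = train_stage(pos_mixed, neg_mixed)
        keep = clf.decision_function(neg_mixed) > thr    # only negatives that still pass move on
        neg_mixed, neg_diff = neg_mixed[keep], neg_diff[keep]
        stages.append(("mixed", clf, thr))
    for _ in range(N):                                   # rear N levels
        if len(neg_diff) == 0:
            break
        clf, thr = train_stage(pos_diff, neg_diff)
        neg_diff = neg_diff[clf.decision_function(neg_diff) > thr]
        stages.append(("diff", clf, thr))
    return stages

Each stage only sees the negatives that survived all previous stages, which is what allows the later stages to rely on the cheaper differential block pair features alone.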
The differential block pair feature is the difference between the sum of gray values in one block and the sum of gray values in another block. As shown in FIG. 3, rectangular region A and rectangular region B are two blocks in the image; the sums of the gray values of all pixels in region A and in region B are computed separately, and the difference between the two sums is then taken.
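A minimal sketch of this computation, using an integral image so that the gray-value sum of any block costs four lookups, together with the random generation of block pairs described in S131 to S133; the (x, y, w, h) block layout and the function names are illustrative assumptions, not taken from the patent text.

import numpy as np

def integral_image(gray):
    """Summed-area table with a zero row/column so that block sums cost four lookups."""
    return np.pad(gray.astype(np.int64).cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def block_sum(ii, x, y, w, h):
    """Sum of gray values inside the block with top-left corner (x, y), width w and height h."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def diff_block_pair_feature(ii, block_a, block_b):
    """Difference of the gray-value sums of two blocks; S133 and S143 use its absolute value."""
    return abs(block_sum(ii, *block_a) - block_sum(ii, *block_b))

def random_block_pairs(width, height, n_blocks=500, seed=0):
    """Blocks from two distinct, uniformly drawn points; then every unordered pair of blocks."""
    rng = np.random.default_rng(seed)
    blocks = []
    while len(blocks) < n_blocks:
        (x1, y1), (x2, y2) = rng.integers(0, [width, height], size=(2, 2))
        if (x1, y1) != (x2, y2):                         # two non-repeating coordinate points
            x, y = min(x1, x2), min(y1, y2)
            blocks.append((x, y, abs(x1 - x2) + 1, abs(y1 - y2) + 1))
    return [(a, b) for i, a in enumerate(blocks) for b in blocks[i + 1:]]   # n*(n-1)/2 pairs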
The LBP histogram feature is obtained by scaling the image block to a specified size, dividing it into a plurality of small regions, extracting an LBP histogram from each region, and finally concatenating the LBP histograms of all the small regions to form the final LBP histogram feature. FIG. 4 shows one division of the iris sample region: an LBP histogram is extracted from the rectangular frame region of each division, and the LBP histograms of all rectangular frame regions are then concatenated one by one to form the final LBP histogram feature.
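A minimal sketch of this feature, assuming the 4 x 4 division of S12 and a basic 8-neighbour LBP code; scaling to the specified size is omitted and the function names are illustrative.

import numpy as np

def lbp_codes(gray):
    """Basic 8-neighbour LBP: each interior pixel gets an 8-bit code from its neighbours."""
    g = gray.astype(np.int32)
    centre = g[1:-1, 1:-1]
    neighbours = [g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:], g[1:-1, 2:],
                  g[2:, 2:], g[2:, 1:-1], g[2:, :-2], g[1:-1, :-2]]
    codes = np.zeros_like(centre)
    for bit, n in enumerate(neighbours):
        codes |= (n >= centre).astype(np.int32) << bit
    return codes

def lbp_histogram_feature(gray, rows=4, cols=4):
    """Divide the sample into rows x cols blocks, histogram the LBP codes per block, concatenate."""
    codes = lbp_codes(gray)
    h, w = codes.shape
    feats = []
    for r in range(rows):
        for col in range(cols):
            block = codes[r * h // rows:(r + 1) * h // rows,
                          col * w // cols:(col + 1) * w // cols]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            feats.append(hist / max(block.size, 1))       # normalised 256-bin histogram per block
    return np.concatenate(feats)                          # 16 blocks -> 4096-dimensional feature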
In S3, the mixed feature of each candidate rectangular frame passes through the front M-level iris detection cascade classifier in sequence. The mixed feature first falls into a leaf node of the 1st-level classifier and the score of that leaf node is obtained; if the score is greater than the passing threshold of the 1st-level classifier, the mixed feature then falls into a leaf node of the 2nd-level classifier, whose score is obtained and compared with the passing threshold of the 2nd-level classifier, and so on until the M-th level classifier. If the mixed feature passes all M levels, the candidate is judged to be an iris and the process goes to S4; otherwise it goes to S5.
In S4, the differential block pair features of each candidate rectangular frame pass through the rear N-level iris detection cascade classifier in sequence. The features first fall into a leaf node of the (M+1)-th level classifier and the score of that leaf node is obtained; if the score is greater than the passing threshold of the (M+1)-th level classifier, the features then fall into a leaf node of the (M+2)-th level classifier, whose score is obtained and compared with the passing threshold of the (M+2)-th level classifier, and so on until the S-th level classifier. If the differential block pair features pass all of the rear N levels, the candidate is judged to be an iris, the current candidate rectangular frame is placed into the set of pending iris rectangular frames, and the process goes to S5; otherwise it goes directly to S5.
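A minimal sketch of this stage-by-stage evaluation (S3 and S4 together), reusing the stage list produced by the training sketch above; compute_mixed_feature and compute_diff_feature stand for the feature computations sketched earlier, and the leaf-node score is approximated by the boosted classifier's decision score.

def passes_cascade(candidate, stages, compute_mixed_feature, compute_diff_feature):
    """Return True if the candidate frame passes every level of the S-level cascade."""
    mixed = diff = None
    for kind, clf, threshold in stages:
        if kind == "mixed":                      # front M levels use the mixed feature
            if mixed is None:
                mixed = compute_mixed_feature(candidate).reshape(1, -1)
            score = clf.decision_function(mixed)[0]
        else:                                    # rear N levels use only the differential block pair features
            if diff is None:
                diff = compute_diff_feature(candidate).reshape(1, -1)
            score = clf.decision_function(diff)[0]
        if score <= threshold:                   # not above the passing threshold: reject immediately
            return False
    return True

# Frames that pass every level form the pending set, which is then clustered as in S6:
# pending = [f for f in candidates if passes_cascade(f, stages, mixed_fn, diff_fn)]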
The above embodiments merely illustrate one or more implementations of the present invention in greater detail and are not to be construed as limiting its scope. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention.

Claims (8)

1. A fast iris detection method based on differential block features, characterized by comprising the following steps:
S1, training and exporting an S-level iris detection cascade classifier: the front M (0 < M < S) levels are trained with mixed features formed from differential block pair features and LBP histogram features, the rear N (M < N < S) levels are trained with differential block pair features only, the position coordinates of the differential blocks are stored, and the trained S-level iris detection cascade classifier is exported;
S2, importing the S-level iris detection cascade classifier, acquiring the iris image to be detected, and constructing candidate rectangular frames by line-by-line scanning;
S3, in the front M-level part of the S-level iris detection cascade classifier, calculating the differential block pair features and the LBP histogram features of each candidate rectangular frame constructed in S2, and judging, through the front M-level iris detection cascade classifier, whether the mixed feature formed from the two indicates an iris; if so, go to S4; otherwise, go to S5; the step S3 includes the following steps:
S31, for each candidate rectangular frame on the i-th pyramid layer, the front M-level iris detection cascade classifier calculates the LBP histogram features, calculates the differential block pair features according to the stored position coordinates of the differential blocks, and takes the mixed feature formed from the LBP histogram features and the differential block pair features as the expression feature of the iris;
S32, inputting the mixed feature into the j-th (0 < j ≤ M) level classifier, where it falls into one leaf node of the j-th level classifier;
S33, obtaining the score of the current leaf node and comparing it with the passing threshold of the j-th level classifier; if the score is greater than the passing threshold, the j-th level classifier is passed and the process returns to S32 to compare the mixed feature against the (j+1)-th level classifier; after all of the front M levels have been compared, if the mixed feature passed every level, the candidate is judged to be an iris and the process goes to S4; otherwise go to S5;
S4, in the rear N-level part of the S-level iris detection cascade classifier, calculating the differential block pair features of each candidate rectangular frame constructed in S2, and judging, through the rear N-level iris detection cascade classifier, whether these features indicate an iris; if so, place the current candidate rectangular frame into the set of pending iris rectangular frames and go to S5, otherwise go directly to S5; the step S4 specifically includes the following steps:
S41, for each candidate rectangular frame on the i-th pyramid layer, the rear N-level iris detection cascade classifier calculates the differential block pair features according to the stored position coordinates of the differential blocks;
S42, inputting the differential block pair features calculated in S41 into the k-th (M < k ≤ S) level classifier, where they fall into one leaf node of the k-th level classifier;
S43, obtaining the score of the current leaf node and comparing it with the passing threshold of the k-th level classifier; if the score is greater than the passing threshold, the k-th level classifier is passed and the process returns to S42 to compare the differential block pair features against the (k+1)-th level classifier; after all of the rear N levels have been compared, if the features passed every level, the candidate is judged to be an iris, the current candidate rectangular frame is placed into the set of pending iris rectangular frames, and the process goes to S5; otherwise, go directly to S5;
S5, judging whether all candidate rectangular frames constructed in S2 have been processed; if not, go to S3, otherwise go to S6;
and S6, clustering the set of pending iris rectangular frames to obtain the final rectangular frame; if the set of pending iris rectangular frames is empty, there is no iris in the current image to be detected.
2. The fast iris detection method based on differential block features of claim 1, wherein training the S-level iris detection cascade classifier in S1 comprises the following steps:
S11, importing positive and negative iris samples;
S12, extracting the differential block pair features and the LBP histogram features of each iris sample;
S13, training the front M-level iris detection cascade classifier with the Adaboost algorithm, using differential block pair features and LBP histogram features;
S14, training the rear N-level iris detection cascade classifier with the Adaboost algorithm, using differential block pair features; if training converges, stop and go to S15, otherwise continue training until convergence;
and S15, stopping training and exporting the S-level iris detection cascade classifier.
3. The differential block feature-based fast iris detection method of claim 2, wherein in S12, each iris sample is divided into a rows x b columns at equal intervals to obtain a x b blocks, and LBP histogram features of the iris samples in each block are extracted and finally concatenated together to form the final LBP histogram features.
4. The fast iris detection method based on differential block features of claim 2, wherein S13 comprises the following steps:
s131, randomly generating two non-repetitive coordinate points in an iris sample in a uniformly distributed mode, wherein the two coordinate points form a block;
s132, generating c blocks according to the mode of S131;
S133, randomly selecting 2 blocks from the c blocks of S132 to form a combination, giving c × (c − 1) ÷ 2 combinations in total, and calculating, within the Adaboost algorithm, the absolute value of the difference of each combination to obtain the differential block pair features;
s134, connecting the differential block pair characteristics in the S133 and the LBP histogram characteristics in the S12 in series to form mixed characteristics;
S135, training the front M levels of the iris detection cascade classifier with the mixed features of S134; after training is finished, the score of each leaf node and the passing threshold of each level of classifier are obtained, and the position coordinates of the differential blocks are stored;
and S136, after the training of the first M-level iris detection cascade classifier is finished, turning to S14.
5. The fast iris detection method based on differential block features of claim 2, wherein S14 comprises the following steps:
s141, randomly generating two non-repetitive coordinate points in an iris sample in a uniformly distributed mode, wherein the two coordinate points form a block;
S142, generating d blocks in the manner of S141;
S143, randomly selecting 2 blocks from the d blocks of S142 to form a combination, giving d × (d − 1) ÷ 2 combinations in total, and calculating, within the Adaboost algorithm, the absolute value of the difference of each combination to obtain the differential block pair features;
S144, training the rear N-level iris detection cascade classifier with the differential block pair features of S143; if training converges, the score of each leaf node of each level of classifier and the passing threshold of the current level of classifier are obtained, the position coordinates of the differential blocks are stored, and the process goes to S145; otherwise training continues until convergence;
and S145, after the training of the subsequent N-stage iris detection cascade classifier is finished, the step is switched to S15.
6. The differential block feature-based fast iris detection method of claim 1, wherein the differential block pair feature is a difference of a sum of gray values of one region and a sum of gray values of another region.
7. The differential block feature-based fast iris detection method of claim 1, wherein the LBP histogram feature is obtained by scaling the image block to a designated size, dividing a plurality of small regions, extracting the LBP histogram features respectively, and finally connecting the LBP histogram features of the plurality of small regions in series to form the final LBP histogram feature.
8. The fast iris detection method based on differential block features of claim 1, wherein the candidate rectangular frames in S2 are constructed as follows: the iris image is repeatedly down-sampled by a fixed factor to build an L-layer image pyramid, and candidate rectangular frames are constructed on the i-th (0 < i ≤ L) pyramid layer by stepping at equal intervals in the horizontal and vertical directions.
CN201710934259.7A 2017-10-10 2017-10-10 Rapid iris detection method based on differential block characteristics Active CN107729834B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710934259.7A CN107729834B (en) 2017-10-10 2017-10-10 Rapid iris detection method based on differential block characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710934259.7A CN107729834B (en) 2017-10-10 2017-10-10 Rapid iris detection method based on differential block characteristics

Publications (2)

Publication Number Publication Date
CN107729834A CN107729834A (en) 2018-02-23
CN107729834B true CN107729834B (en) 2021-02-12

Family

ID=61209870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710934259.7A Active CN107729834B (en) 2017-10-10 2017-10-10 Rapid iris detection method based on differential block characteristics

Country Status (1)

Country Link
CN (1) CN107729834B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801067B (en) * 2021-04-13 2021-08-03 北京万里红科技股份有限公司 Method for detecting iris light spot and computing equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236608B (en) * 2008-01-25 2010-08-04 清华大学 Human face detection method based on picture geometry
CN102411709A (en) * 2011-12-02 2012-04-11 湖南大学 Iris segmentation recognition method
CN102629319B (en) * 2012-03-27 2014-02-19 中国科学院自动化研究所 Robust iris region segmentation method based on specific boundary detectors
CN105303200B (en) * 2014-09-22 2018-10-16 电子科技大学 Face identification method for handheld device
CN105303163B (en) * 2015-09-22 2019-03-01 深圳市华尊科技股份有限公司 A kind of method and detection device of target detection
CN106778478A (en) * 2016-11-21 2017-05-31 中国科学院信息工程研究所 A kind of real-time pedestrian detection with caching mechanism and tracking based on composite character

Also Published As

Publication number Publication date
CN107729834A (en) 2018-02-23

Similar Documents

Publication Publication Date Title
CN111401257B (en) Face recognition method based on cosine loss under non-constraint condition
CN111626371B (en) Image classification method, device, equipment and readable storage medium
CN108960080B (en) Face recognition method based on active defense image anti-attack
CN102682287B (en) Pedestrian detection method based on saliency information
CN106778796B (en) Human body action recognition method and system based on hybrid cooperative training
CN110929848B (en) Training and tracking method based on multi-challenge perception learning model
CN112734775A (en) Image annotation, image semantic segmentation and model training method and device
CN103902978B (en) Face datection and recognition methods
CN107657225B (en) Pedestrian detection method based on aggregated channel characteristics
CN109002755B (en) Age estimation model construction method and estimation method based on face image
CN109165658B (en) Strong negative sample underwater target detection method based on fast-RCNN
CN107066951B (en) Face spontaneous expression recognition method and system
CN110084130B (en) Face screening method, device, equipment and storage medium based on multi-target tracking
CN110706235B (en) Far infrared pedestrian detection method based on two-stage cascade segmentation
CN105243376A (en) Living body detection method and device
CN106529504A (en) Dual-mode video emotion recognition method with composite spatial-temporal characteristic
CN109034012A (en) First person gesture identification method based on dynamic image and video sequence
CN106874825A (en) The training method of Face datection, detection method and device
CN106650637A (en) Smiling face detector based on condition random forests and method
CN112926522A (en) Behavior identification method based on skeleton attitude and space-time diagram convolutional network
CN107729834B (en) Rapid iris detection method based on differential block characteristics
CN114037886A (en) Image recognition method and device, electronic equipment and readable storage medium
Zhu et al. A novel simple visual tracking algorithm based on hashing and deep learning
CN106446832B (en) Video-based pedestrian real-time detection method
CN110766093A (en) Video target re-identification method based on multi-frame feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100081 Room 204, building 3, Fuhai center, Daliushu, Haidian District, Beijing

Patentee after: Beijing wanlihong Technology Co.,Ltd.

Address before: 100081 Room 204, building 3, Fuhai center, Daliushu, Haidian District, Beijing

Patentee before: BEIJING SUPERRED TECHNOLOGY Co.,Ltd.