WO2017113039A1 - Iris region segmentation method and device based on active appearance model - Google Patents

Iris region segmentation method and device based on active appearance model

Info

Publication number
WO2017113039A1
Authority
WO
WIPO (PCT)
Prior art keywords
human eye
texture
active appearance
model
appearance model
Prior art date
Application number
PCT/CN2015/000940
Other languages
French (fr)
Chinese (zh)
Inventor
王晓鹏
Original Assignee
王晓鹏
Priority date
Filing date
Publication date
Application filed by 王晓鹏 filed Critical 王晓鹏
Priority to PCT/CN2015/000940 priority Critical patent/WO2017113039A1/en
Priority to CN201580085642.9A priority patent/CN109074471B/en
Publication of WO2017113039A1 publication Critical patent/WO2017113039A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition

Definitions

  • The present invention relates to the field of image processing, and in particular to an iris region segmentation method and apparatus based on an Active Appearance Model (AAM).
  • AAM: Active Appearance Model
  • Biometric identification uses certain unique features of the human body, discriminated by suitable techniques, to establish a person's identity. Compared with traditional authentication technology, biometrics offers higher effectiveness, security, and reliability.
  • The iris is an annular region of the human eye between the black pupil and the white sclera, containing many interlaced spots, filaments, crowns, stripes, crypts, and other detailed features.
  • When an iris image is captured, the pupil, eyelids, and eyelashes are usually imaged together with the iris. Since iris recognition requires only the region between the pupil and the sclera that is not occluded by the eyelids and eyelashes, and no other information, locating and segmenting the iris region has become a hot and difficult topic in the field of iris recognition.
  • The classical iris segmentation methods are the integro-differential operator proposed by Daugman and the two-step localization algorithm proposed by Wildes, which combines edge detection with the Hough transform.
  • The advantage of Daugman's operator is that it can be computed directly on the grayscale image without preprocessing, but it also has the following disadvantages: it becomes very slow when the coarse localization of the center and radius of the outer iris circle is inaccurate.
  • The light spots formed during iris image acquisition affect localization accuracy; in particular, localization is error-prone under uneven illumination, shadows, reflections, and occlusions.
  • Because the operator must search and iterate for the optimum in a three-dimensional parameter space, its computational cost is high and its speed relatively slow.
  • As for the two-step localization algorithm of Wildes combining edge detection with the Hough transform, the Hough transform is insensitive to image noise and robust, so the algorithm's advantage is its insensitivity to noise. However, the Hough transform is computationally expensive and the extracted parameters are constrained by the quantization interval of the parameter space, so the algorithm depends excessively on the accuracy of edge point detection; in addition, searching for the center and radius requires voting in a three-dimensional parameter space, so the computation and storage costs are large.
  • The present invention has been made to solve the above technical problems, and its object is to provide an accurate iris region segmentation method and apparatus that realize segmentation of the iris region quickly and robustly.
  • Unlike the prior art, the present inventors, through intensive research, applied the active appearance model widely used in face modeling and face localization to the field of iris region segmentation, while fully considering the occlusion of the iris by the upper and lower eyelids, and on this basis propose the active-appearance-model-based iris region segmentation method and apparatus of the present invention.
  • According to one aspect, an iris region segmentation method based on an active appearance model is provided, the method including: an active appearance model establishing step of establishing an active appearance model composed of a human eye shape model and a human eye texture model using a plurality of pre-collected human eye sample images; an active appearance model matching step of matching an input human eye image on which iris region segmentation is to be performed with the active appearance model established in the establishing step, to obtain a plurality of feature points presenting the human eye contour in the input image; and a boundary fitting step of selecting, from the plurality of feature points obtained in the matching step, feature points for fitting each boundary in the input image and performing fitting, to obtain the segmented iris region, wherein phase congruency information is used in both the establishing step and the matching step.
  • By using the phase congruency information of the human eye image in the establishing step, the texture features of the pupil, the iris, and the upper and lower eyelids are obtained more clearly than with existing methods, so an active appearance model more accurate than the prior art can be built; and by using the phase congruency information in the matching step, the human eye contour can be presented very accurately, realizing an iris region segmentation method more accurate than the prior art.
  • The active appearance model establishing step includes: a sample image phase congruency information calculating step of calculating phase congruency information for each of the plurality of pre-collected human eye sample images; and a human eye texture model establishing step of using the calculated phase congruency information to establish the human eye texture model constituting the active appearance model.
  • In the iris region segmentation method of the present invention, using the calculated phase congruency information to assist in marking the human eye images yields a texture model more accurate than one established in the existing way.
  • The human eye texture model establishing step includes: a shape division step of performing Delaunay triangulation on the mean shape of the pre-collected human eye sample images and on the plurality of sample shapes obtained by marking those sample images with the calculated phase congruency information; a texture normalization step of mapping the calculated phase congruency information onto the mean shape by a piecewise affine transformation, to obtain normalized sample texture information; and a principal component analysis processing step of processing the sample texture information by principal component analysis to obtain the texture parameters and texture model.
  • Alternatively, the human eye texture model establishing step includes: a texture normalization step of mapping the calculated phase congruency information onto the mean shape using a corresponding-point-based image registration algorithm, to obtain normalized sample texture information; and a principal component analysis processing step of processing the sample texture information by principal component analysis to obtain the texture parameters and texture model.
  • The active appearance model matching step includes: an input image phase congruency information calculation step of calculating phase congruency information for the input human eye image on which iris region segmentation is to be performed; an input image texture calculation step of calculating the texture of the input image using its calculated phase congruency information; and an appearance texture matching step of matching the calculated texture of the input image with the texture of the active appearance model established in the establishing step, to obtain the plurality of feature points presenting the human eye contour in the input image.
  • The active appearance model establishing step further includes: a sample image collecting step of pre-collecting the plurality of human eye sample images of the left and right eyes of different people; a feature point calibration step of manually calibrating feature points on the sample images; a feature point alignment step of aligning the corresponding feature points in the sample images; a human eye shape model establishing step of establishing the human eye shape model constituting the active appearance model using the feature points of the aligned sample images; and a synthesizing step of combining the established human eye shape model and human eye texture model to obtain the active appearance model.
  • Procrustes analysis is used to obtain aligned images from which translation, scale, and rotation are removed.
  • The human eye shape model and the human eye texture model are obtained by principal component analysis.
  • In the iris region segmentation method of the present invention, principal component analysis is used to process the data in order to obtain the shape model and the texture model, thereby reducing the amount of data to be processed and saving computation time.
  • The boundaries include the iris boundary, the pupil boundary, and the upper and lower eyelid boundaries.
  • When the iris boundary is fitted in the boundary fitting step, at least a portion of the feature points located on the left and right iris boundaries are selected from the plurality of feature points obtained in the active appearance model matching step to perform fitting.
  • When the pupil boundary is fitted in the boundary fitting step, at least a portion of the feature points located on the pupil boundary are selected from the plurality of feature points obtained in the active appearance model matching step to perform fitting.
  • When the upper and lower eyelid boundaries are fitted, at least a portion of the feature points located at the middle portions of those boundaries, away from the eye corners, are selected from the plurality of feature points obtained in the active appearance model matching step to perform fitting.
  • According to another aspect, an iris region segmentation device based on an active appearance model is provided, the device including: an active appearance model establishing device configured to establish an active appearance model composed of a human eye shape model and a human eye texture model using a plurality of pre-collected human eye sample images; an active appearance model matching device configured to match an input human eye image on which iris region segmentation is to be performed with the active appearance model established in the establishing device, to obtain a plurality of feature points presenting the human eye contour in the input image; and a boundary fitting device configured to select, from the plurality of feature points obtained by the matching device, feature points for fitting each boundary in the input image and perform fitting, to obtain the segmented iris region, wherein phase congruency information is used in both the establishing device and the matching device.
  • In the iris region segmentation device, by using the phase congruency information of the human eye image in the establishing device, the texture features of the pupil, the iris, and the upper and lower eyelids can be obtained more clearly than with existing methods, so an active appearance model more accurate than the prior art can be established; and by using the phase congruency information in the matching device, the human eye contour can be presented very accurately, realizing an iris region segmentation device more accurate than the prior art.
  • The active appearance model establishing device includes: a sample image phase congruency information calculation portion configured to calculate phase congruency information for each of the pre-collected human eye sample images; and a human eye texture model establishing section configured to use the calculated phase congruency information to establish the human eye texture model constituting the active appearance model.
  • In the iris region segmentation device of the present invention, using the calculated phase congruency information to assist in marking the human eye images yields a texture model more accurate than one established in the existing way.
  • The human eye texture model establishing portion includes: a shape division unit configured to perform Delaunay triangulation on the mean shape of the pre-collected human eye sample images and on the plurality of sample shapes obtained by marking those sample images with the calculated phase congruency information; a texture normalization unit configured to map the calculated phase congruency information onto the mean shape by the piecewise affine transformation, to obtain normalized sample texture information; and a principal component analysis processing unit configured to process the sample texture information by principal component analysis to obtain the texture parameters and texture model.
  • Alternatively, the human eye texture model establishing portion includes: a texture normalization unit configured to map the calculated phase congruency information onto the mean shape using the corresponding-point-based image registration algorithm, to obtain normalized sample texture information; and a principal component analysis processing unit configured to process the sample texture information by principal component analysis to obtain the texture parameters and texture model.
  • The active appearance model matching device includes: an input image phase congruency information calculation portion configured to calculate phase congruency information for the input human eye image on which iris region segmentation is to be performed; an input image texture calculation section configured to calculate the texture of the input image using its calculated phase congruency information; and an appearance texture matching section configured to match the calculated texture of the input image with the texture of the active appearance model established in the establishing device, to obtain the plurality of feature points presenting the human eye contour in the input image.
  • The active appearance model establishing device further includes: a sample image collecting portion configured to pre-collect the plurality of human eye sample images of the left and right eyes of different people; a feature point calibration portion configured to manually calibrate feature points on the sample images; a feature point alignment portion configured to align the corresponding feature points in the sample images; a human eye shape model establishing portion configured to establish the human eye shape model constituting the active appearance model using the feature points of the aligned sample images; and a synthesizing portion configured to combine the established human eye shape model and human eye texture model to obtain the active appearance model.
  • The feature point alignment portion is configured to use Procrustes analysis to obtain aligned images from which translation, scale, and rotation are removed.
  • The human eye shape model establishing portion and the human eye texture model establishing portion are configured to obtain the human eye shape model and the human eye texture model by principal component analysis.
  • The data are processed using principal component analysis, whereby the amount of data to be processed is reduced and computation time is saved.
  • The boundaries include the iris boundary, the pupil boundary, and the upper and lower eyelid boundaries.
  • When the iris boundary is fitted in the boundary fitting device, at least a portion of the feature points located on the left and right iris boundaries are selected from the plurality of feature points obtained by the active appearance model matching device to perform fitting.
  • When the pupil boundary is fitted, at least a portion of the feature points located on the pupil boundary are selected from the plurality of feature points obtained by the active appearance model matching device to perform fitting.
  • When the upper and lower eyelid boundaries are fitted, at least a portion of the feature points located at the middle portions of those boundaries, away from the eye corners, are selected from the plurality of feature points obtained by the active appearance model matching device to perform fitting.
  • FIG. 1 is a flow chart of an iris region segmentation method 100 in accordance with one embodiment of the present invention.
  • FIG. 2 is a diagram showing an implementation flow of a step S101 of establishing a human eye active appearance model according to an embodiment of the present invention
  • Figure 3 is an example acquisition image of the human eye
  • Figure 4 is a schematic view of selected human eye feature points
  • FIG. 5 is a diagram showing phase consistency information calculated for an example captured image
  • FIG. 6 is a diagram showing an implementation flow of a step S1015 of establishing a human eye texture model according to an embodiment of the present invention
  • Figure 7 is a schematic diagram of Delaunay triangulation of a point set
  • Figure 8 is a schematic diagram of the piecewise linear affine transformation
  • Figure 9 is a diagram showing an implementation flow of a step S1015' of establishing a human eye texture model according to an alternative embodiment of the present invention.
  • FIG. 10 is a diagram showing the implementation flow of the step S102 of matching the active appearance model to a new human eye image according to one embodiment of the present invention;
  • FIG. 11 is a view showing a series of feature points reflecting the human eye contour, obtained by matching a new human eye image on which iris region segmentation is to be performed to the active appearance model;
  • Figure 12 is a diagram showing the result of fitting
  • Figure 13 is a block diagram of an iris region segmentation device 1300, in accordance with one embodiment of the present invention.
  • FIG. 14 is a block diagram of an iris region segmentation device 1400 in accordance with an alternate embodiment of the present invention.
  • FIG. 1 is a flow chart of an iris region segmentation method 100 in accordance with one embodiment of the present invention.
  • In step S101, an active appearance model of the human eye is established.
  • The active appearance model is used for boundary detection and image segmentation; it is formed by using the shape information and texture information of images to create a shape model and a texture model and then combining the two.
  • Its purpose is to obtain the shape of the target region, the affine transformation coefficients, and the like from a pre-trained model.
  • The following example illustrates how to build an active appearance model of the human eye.
  • FIG. 2 is a diagram showing an implementation flow of a step S101 of establishing a human eye active appearance model according to an embodiment of the present invention.
  • First, human eye sample images are acquired and feature points are calibrated (step S1011). Specifically, N sharp images I of the left and right eyes of different people are collected.
  • The image shown in Figure 3 is one such acquired sharp image I.
  • When the n feature points are calibrated, feature points whose texture features change significantly (for example, on the upper and lower eyelid boundaries, the iris boundary, and the pupil boundary) are selected. Note that, because of eyelid occlusion, the upper and lower boundaries of the iris may not exist; therefore, when selecting feature points on the iris boundary, rather than selecting all feature points on the circular iris boundary, only feature points on the left and right parts of the iris that are not occluded by the eyelids are selected.
  • Figure 4 is a schematic illustration of the selected human eye feature points. Because of physical constraints, the pupil, eyelashes, and the like may be captured along with the iris when the iris image is acquired; to account for their influence, a total of 68 feature points are selected in the present embodiment. The positions of the 68 selected feature points are shown in FIG. 4: feature points 19 to 36 (18 points) are selected on the upper eyelid boundary, and feature points 1 to 18 (18 points) on the lower eyelid boundary.
  • Feature points 57 to 68 (12 points) are selected on the pupil boundary, and 10 feature points each are selected on the left and right iris boundaries not occluded by the eyelids, namely feature points 52 to 56 and 37 to 41 on the left side and feature points 42 to 51 on the right side.
  • Next, phase congruency information is calculated for each of the N sharp images I (step S1012). Since a human eye image is understood mainly from low-level features such as step edges and zero-crossing edges, the present invention, unlike the prior art, uses phase congruency information, which benefits edge detection, in the process of establishing the active appearance model.
  • Phase congruency is a frequency-domain method for edge detection and texture analysis; it measures the phase similarity of the frequency components at each position of the image. It is a dimensionless quantity whose value ranges from 1 down to 0, indicating a decrease from a salient feature to no feature.
  • The phase congruency of the sharp human eye image I at a point x can be calculated by the following formula (1):
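  • Formula (1) appears only as an image in the original. A standard form of the phase congruency measure (following Kovesi), consistent with the surrounding description, is the following hedged reconstruction:

$$PC(x) = \frac{\sum_{n} W(x) \left\lfloor A_{n}(x)\,\Delta\Phi_{n}(x) - T \right\rfloor}{\sum_{n} A_{n}(x) + \varepsilon}$$

where $A_n(x)$ is the amplitude of the n-th frequency component at $x$, $\Delta\Phi_n(x)$ a phase deviation measure, $W(x)$ a frequency-spread weight, $T$ a noise threshold, $\varepsilon$ a small constant, and $\lfloor \cdot \rfloor$ truncates negative values to zero.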
  • The transfer function of the two-dimensional log-Gabor filter in the frequency domain is defined as follows:
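  • The transfer function itself is likewise an image in the original. The customary frequency-domain form of a two-dimensional log-Gabor filter, consistent with the parameter definitions below (with $\theta_0$ the filter orientation), would be:

$$G(\omega, \theta) = \exp\!\left( -\frac{\left( \log(\omega/\omega_0) \right)^2}{2 \left( \log(\sigma_r/\omega_0) \right)^2} \right) \exp\!\left( -\frac{(\theta - \theta_0)^2}{2\sigma_\theta^2} \right)$$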
  • ω0 is the center frequency of the filter
  • σr is the bandwidth of the filter
  • σθ is the angular bandwidth of the filter
  • The phase congruency computed by the above formulas for the image of Fig. 3, after feature point calibration, is shown in Fig. 5.
  • As Fig. 5 shows, using phase congruency yields an eye contour image in which the texture features of the upper and lower eyelids, the pupil, and the iris are very clear.
  • Next, the corresponding feature points in the above N sharp images I are aligned (step S1013). Specifically, Procrustes analysis is performed on the N calibrated sharp images I: the center of gravity of the shape of each of the N sharp images I is calculated separately; the N centers of gravity are moved to the same position; the shapes of the N sharp images I are then expanded or reduced to the same size; and finally, the difference in rotation angle is calculated from the positions of corresponding shape points in two sharp images I, and the shapes are rotated so that their angles agree.
  • In this way, the corresponding feature points of the different images are aligned, and aligned human eye images from which translation, scale, and rotation are removed are obtained.
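  • As an illustration of this alignment step, the following minimal sketch performs generalized Procrustes alignment with NumPy. It is an assumption-laden illustration, not the patent's reference implementation; reflection handling and convergence tests are omitted for brevity.

```python
import numpy as np

def align_pair(shape, ref):
    """Procrustes-align one (n, 2) landmark shape to a reference shape."""
    a = shape - shape.mean(axis=0)      # remove translation
    b = ref - ref.mean(axis=0)
    a = a / np.linalg.norm(a)           # remove scale
    b = b / np.linalg.norm(b)
    u, _, vt = np.linalg.svd(a.T @ b)   # orthogonal Procrustes rotation
    return a @ (u @ vt)                 # remove rotation

def generalized_procrustes(shapes, iterations=10):
    """Iteratively align N shapes to their evolving mean shape."""
    ref = shapes[0]
    for _ in range(iterations):
        shapes = np.stack([align_pair(s, ref) for s in shapes])
        ref = shapes.mean(axis=0)       # new mean shape estimate
        ref = ref / np.linalg.norm(ref) # keep the reference at unit scale
    return shapes, ref
```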
  • After the feature points in the acquired N sharp images I have been aligned by the above step S1013, a shape model constituting the active appearance model is established (step S1014).
  • The n aligned feature points of each image are concatenated to form a shape vector s_i, and the N shape vectors of the N sharp images I are stacked into an N × 2n human eye shape matrix s by the following formula (5):
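  • Formula (5) is an image in the original. A plausible reconstruction consistent with the text, writing each aligned image's concatenated landmark coordinates as a row vector $s_i = (x_{i1}, y_{i1}, \ldots, x_{in}, y_{in})$, stacks the rows and forms the covariance matrix U referred to below:

$$s = \begin{pmatrix} s_1 \\ s_2 \\ \vdots \\ s_N \end{pmatrix} \in \mathbb{R}^{N \times 2n}, \qquad U = \frac{1}{N} \sum_{i=1}^{N} (s_i - \bar{s})^{T} (s_i - \bar{s})$$

where $\bar{s}$ is the mean shape vector.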
  • The eigenvalues and eigenvectors of the covariance matrix U are calculated, the eigenvalues are sorted in descending order, and the eigenvectors corresponding to the first k largest eigenvalues are retained, such that the energy of the first k eigenvalues exceeds 95% of the total energy.
  • PCA: Principal Component Analysis
  • Φs is the transformation matrix formed by the principal component eigenvectors obtained by principal component analysis
  • bs is the statistical shape parameter that controls shape variation
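  • The shape model equation itself is an image in the original; in the standard active appearance model form consistent with the symbols just defined, any shape $s$ is expressed as:

$$s = \bar{s} + \Phi_s\, b_s$$

where $\bar{s}$ is the mean shape.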
  • Next, the calculated phase congruency information is used to establish the texture model constituting the active appearance model (step S1015).
  • The implementation flow of the establishing step S1015 of the human eye texture model according to an embodiment of the present invention is shown in FIG. 6.
  • In step S1015a, the above mean shape and the N sample shapes, each characterized by the series of feature points obtained by marking the N sharp images I using their calculated phase congruency information, are respectively subjected to Delaunay triangulation.
  • So-called Delaunay triangulation is a technique for joining spatial points into triangles so as to maximize the minimum angle of all the triangles; its defining property is that the circumcircle of any triangle contains no other vertex.
  • Figure 7 is a schematic diagram of the Delaunay triangulation of a point set. Following the process shown in Figure 7, Delaunay triangulation is applied to the above mean shape and to each of the above N sample shapes, so that the mean shape and the N sample shapes are each divided into a series of triangles.
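  • As a minimal sketch (an illustration, not the patent's implementation), the triangulation can be computed with SciPy; because all shapes share the same landmark indexing, the simplices computed on the mean shape transfer directly to every sample shape:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
mean_shape = rng.random((68, 2))   # placeholder for the 68 aligned landmarks
tri = Delaunay(mean_shape)         # Delaunay triangulation of the mean shape
print(tri.simplices.shape)         # (num_triangles, 3) landmark indices
# Applying the same simplices to each sample shape's landmarks yields the
# corresponding triangles needed for the piecewise affine warp below.
```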
  • In step S1015b, the phase congruency information of the acquired N sharp images I is mapped onto the mean shape by a piecewise affine transformation, achieving normalization of the textures. Because the triangles obtained by Delaunay triangulation of the mean shape and of the N sample shapes correspond to one another, the position of each point within a triangle of a sample shape can be projected by piecewise linear affine transformation to the corresponding position within the corresponding triangle of the mean shape, and the phase congruency value of that point can then be mapped to the position of the corresponding point in the mean shape.
  • Figure 8 is a schematic illustration of the piecewise linear affine transformation.
  • The triangle on the left and the triangle on the right respectively represent a triangle obtained by Delaunay triangulation of a sample shape and of the above mean shape.
  • The positions and correspondences of the vertices v1, v2, v3 and v'1, v'2, v'3 of the two triangles are known.
  • For a point p in the sample-shape triangle, a linear affine transformation based on barycentric coordinates gives the position of the corresponding point p' in the mean-shape triangle, which completes the mapping of the phase congruency information (i.e., the texture information) at the corresponding points.
  • By this method, the phase congruency information (i.e., texture information) of each of the above N sharp images I can be mapped onto the above mean shape, thereby normalizing the textures: the piecewise linear affine transformation maps the phase congruency information of the N sample shapes into a uniform reference frame, ready for the creation of the texture model in the next step.
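  • In symbols (a standard formulation of the piecewise linear warp described above): a point $p$ inside the sample-shape triangle $(v_1, v_2, v_3)$ has barycentric coordinates $(\alpha, \beta, \gamma)$, and its image $p'$ in the mean-shape triangle $(v'_1, v'_2, v'_3)$ reuses them:

$$p = \alpha v_1 + \beta v_2 + \gamma v_3, \quad \alpha + \beta + \gamma = 1 \;\;\Longrightarrow\;\; p' = \alpha v'_1 + \beta v'_2 + \gamma v'_3$$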
  • In step S1015c, all normalized sample texture information is processed by principal component analysis to obtain the texture parameters and thereby the texture model. Specifically, all normalized sample texture information is first averaged to obtain the average texture. Next, principal component analysis is performed by a method similar to the above step S1014, and the eigenvectors corresponding to the first m eigenvalues, sorted by magnitude, are obtained. These eigenvectors are then composed into a principal component analysis projection matrix Φg, yielding the texture model given by the following formula (9):
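  • Formula (9) is an image in the original; in the standard form consistent with the symbols defined below, the texture model reads:

$$g = \bar{g} + \Phi_g\, b_g$$

where $\bar{g}$ is the average texture.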
  • Φg is the transformation matrix formed by the texture principal component eigenvectors obtained by principal component analysis
  • bg is the statistical texture parameter that controls texture variation
  • On the basis of the average texture, a new texture instance can be obtained by adjusting the statistical texture parameter bg.
  • The establishing step S1015 of the human eye texture model shown in FIG. 6 is merely an example, and various modifications can achieve the same effect.
  • Alternatively, an image registration algorithm based on corresponding points may be used to map the phase congruency information of the acquired N sharp images I onto the above mean shape to achieve normalization of the textures.
  • The implementation flow of this alternative is shown in FIG. 9. Specifically, the phase congruency information of the acquired N sharp images I is first mapped onto the above mean shape using a corresponding-point-based image registration algorithm, such as an image registration algorithm based on thin-plate spline functions (step S1015'a).
  • The basic idea of a corresponding-point-based image registration algorithm is to compute an optimal spatial transformation so that the positions of the corresponding feature points in two or more images, acquired under different conditions or by different imaging devices, are brought into one-to-one correspondence. Then, similarly to FIG. 6, all normalized sample texture information is processed by principal component analysis to obtain the texture parameters and the texture model (step S1015c).
  • Although FIG. 2 shows that the phase congruency information of the N sharp images I is calculated (step S1012) before the feature points are aligned (step S1013) and the shape model constituting the active appearance model is established (step S1014), the order of these steps is not limited thereto.
  • As long as step S1012 is performed before the texture model is established (step S1015/step S1015') and step S1013 is performed before step S1014, the order of the steps may be changed arbitrarily, or steps may be executed at the same time.
  • For example, step S1012 and step S1013 may be performed simultaneously, and then step S1014 and step S1015/step S1015' performed in sequence; or step S1012 may be performed after step S1013, and then step S1014 and step S1015/step S1015' performed in sequence; or step S1013 and step S1014 may be performed first, then step S1012, and then step S1015/step S1015'.
  • In step S1016, the two models are combined into the active appearance model. Specifically, bs and bg are first concatenated according to the following formula (10) to obtain an appearance feature vector b:
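  • Formula (10) is an image in the original; a standard reconstruction consistent with the surrounding text is:

$$b = \begin{pmatrix} W_s\, b_s \\ b_g \end{pmatrix}$$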
  • Ws is a diagonal matrix used to adjust for the difference in dimension between bs and bg.
  • Next, principal component analysis is performed on the obtained appearance feature vector b to further eliminate the correlation between shape and texture, thereby obtaining an active appearance model given by the following formula (11):
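  • Formula (11) is an image in the original; in the classical active appearance model formulation consistent with the symbol list below (noting that in many formulations the mean of $b$ is zero, so that $b = Qc$), it reads:

$$b = \bar{b} + Q\, c$$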
  • b̄ is the average appearance vector
  • Q is the transformation matrix formed by the principal component eigenvectors obtained by principal component analysis
  • c is the appearance model parameter that controls appearance variation
  • In step S102, the active appearance model obtained in step S101 is used to match a new human eye image, different from the above N sharp images I, on which iris region segmentation is to be performed, in order to obtain a series of feature points that accurately represent the human eye contour in the new human eye image.
  • The following example illustrates how to match the active appearance model to a new human eye image.
  • FIG. 10 is a diagram showing an implementation flow of a matching step S102 of an active appearance model and a new human eye image according to an embodiment of the present invention.
  • First, phase congruency information is calculated for the new human eye image In on which iris region segmentation is to be performed (step S1021).
  • The calculation method can be the same as that employed in step S1012 of FIG. 2.
  • Next, using the phase congruency information calculated in the above step S1021, the texture gs obtained by deforming the human eye image In to the mean shape according to the current shape s is calculated (step S1022).
  • Then, the appearance model parameter c in the active appearance model obtained in step S101 is changed continuously so as to optimize the objective function given by the following formula (12), until the texture of the active appearance model is consistent with the texture of the human eye image In (step S1023):
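  • Formula (12) is an image in the original; consistent with the description, the objective minimized over the appearance parameter $c$ is presumably the squared texture residual:

$$E(c) = \left\| g_s - g_m(c) \right\|^{2}$$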
  • gs is the texture of the new human eye image In on which iris region segmentation is to be performed
  • gm is the texture of the active appearance model obtained in step S101
  • In the final step of each matching iteration (step V of the iterative procedure, whose earlier steps optimize the texture residual as described above), the iteration count t is updated to t+1, and it is determined whether the difference δ′g between the texture of the human eye image In and the texture of the active appearance model is less than a threshold δ; if so, the iteration exits; otherwise, the procedure returns to step III. If the number of iterations exceeds a predetermined number, the image is considered not to contain a human eye.
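  • The following minimal sketch illustrates such an iterative matching loop. The `model` object, its helper methods, and the precomputed update matrix are assumptions made for illustration only, not the patent's reference implementation.

```python
import numpy as np

def match_aam(image_pc, model, c0, delta=1e-3, max_iters=50):
    """Fit appearance parameters c to an image's phase congruency texture.

    image_pc: phase congruency map of the input eye image (hypothetical input)
    model:    trained AAM with hypothetical helpers shape_from_params,
              warp_to_mean_shape, texture_from_params, and update_matrix
    """
    c = c0
    for _ in range(max_iters):
        shape = model.shape_from_params(c)                # current shape s
        g_s = model.warp_to_mean_shape(image_pc, shape)   # image texture g_s
        g_m = model.texture_from_params(c)                # model texture g_m
        residual = g_s - g_m
        if np.linalg.norm(residual) ** 2 < delta:         # formula (12) objective
            return c, True                                # converged: eye located
        c = c - model.update_matrix @ residual            # precomputed update step
    return c, False    # iteration budget exceeded: assume no eye in the image
```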
  • A human eye image on which iris region segmentation is to be performed is shown in FIG. 11.
  • When the texture of the human eye image shown in FIG. 11 is matched with the texture of the previously established active appearance model by the matching step S102 described above, a series of feature points is obtained on the human eye image. These feature points coincide very precisely with the iris boundary, the pupil boundary, and the upper and lower eyelid boundaries in the image, and thus represent the contour of the human eye image very accurately.
  • For any new human eye image In on which iris region segmentation is to be performed, as long as the objective function given by the above formula (12) is continuously optimized in the matching step S102 as described above, until the difference between the texture of In and the texture of the previously established active appearance model is less than a predetermined threshold, a plurality of feature points almost completely coinciding with the boundaries in In can be obtained. This ensures not only the overall matching accuracy but also the matching precision at each feature point, so the human eye contour is presented more accurately, providing precise information for the subsequent iris region segmentation.
  • After the previously established active appearance model has been matched to the new human eye image on which iris region segmentation is to be performed, yielding a series of feature points that accurately represent the human eye contour in the new image, the method returns to FIG. 1 and proceeds to step S103.
  • In step S103, a plurality of feature points are selected from the series of feature points obtained in the above step S102, and the iris boundary, the pupil boundary, and the upper and lower eyelid boundaries are respectively fitted using the least squares method.
  • The result of the fitting is shown in FIG. 12. The selection of feature points for fitting the iris boundary, the pupil boundary, and the upper and lower eyelid boundaries is described below.
  • For the iris boundary, the feature points on the left and right iris boundaries that are not occluded by the upper and lower eyelids are selected.
  • For example, the 20 feature points 37 to 56 shown in FIG. 4 can be selected to fit the iris boundary.
  • The iris boundary obtained by fitting with the 20 feature points 37 to 56 is shown in FIG. 12.
  • The selection of feature points for fitting the iris boundary is not limited thereto: only a subset of the feature points may be used, for example a combination of the three feature points 38, 48, and 56 shown in FIG. 4, or some other number of feature points.
  • Since the pupil boundary is generally not affected by the upper and lower eyelids, all of the feature points 57 to 68 on the pupil boundary can be used for fitting.
  • The pupil boundary obtained by fitting with the 12 feature points 57 to 68 is shown in FIG. 12.
  • The selection of feature points for fitting the pupil boundary is not limited thereto: only a subset of the feature points may be used, for example a combination of three of the pupil boundary feature points shown in FIG. 4, or some other number of feature points.
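  • As a minimal sketch of the least-squares fitting of the circular boundaries (one possible realization, not necessarily the patent's), the algebraic Kasa circle fit can be applied to the selected iris or pupil feature points:

```python
import numpy as np

def fit_circle(points):
    """Least-squares circle x^2 + y^2 + D*x + E*y + F = 0 through (n, 2) points."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = (-D / 2.0, -E / 2.0)
    radius = np.sqrt(center[0]**2 + center[1]**2 - F)
    return center, radius
```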
  • For the upper eyelid, the feature points near the left and right eye corners are not suitable for fitting the parabola, so only the 10 feature points 23 to 32 are used to fit the upper eyelid.
  • The fitted upper eyelid boundary is shown in Figure 12.
  • The selection of feature points for fitting the upper eyelid boundary is not limited thereto: fewer than the above 10 feature points may be used, for example a combination of the three feature points 25, 28, and 30 shown in FIG. 4, or some other number of feature points.
  • Likewise, for the lower eyelid, the feature points near the left and right eye corners are not suitable for fitting the parabola, so only the 10 feature points 5 to 14 are used to fit the lower eyelid.
  • The fitted lower eyelid boundary is shown in Figure 12.
  • The selection of feature points for fitting the lower eyelid boundary is not limited thereto: fewer than the above 10 feature points may be used, for example a combination of the three feature points 7, 10, and 12 shown in FIG. 4, or some other number of feature points.
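  • The eyelid boundaries are parabolas, so the corresponding least-squares fit is a quadratic in x; a minimal sketch (again illustrative only) is:

```python
import numpy as np

def fit_parabola(points):
    """Least-squares parabola y = a*x^2 + b*x + c through (n, 2) eyelid points."""
    x, y = points[:, 0], points[:, 1]
    return np.polyfit(x, y, deg=2)   # returns (a, b, c), highest degree first
```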
  • After the above fitting, a common region located below the upper eyelid, above the lower eyelid, outside the pupil boundary, and within the iris boundary is obtained.
  • This common region is the effective iris region (as shown in Figure 12), which completes the segmentation of the iris region.
  • According to the method of the present invention, by using the phase congruency information of the human eye image to establish the texture model of the active appearance model, texture information in which the texture features of the pupil, the iris, and the upper and lower eyelids are clearer can be obtained, so that an active appearance model more accurate than the prior art can be established. In addition, by using the phase congruency information of the human eye image when matching the human eye image to the active appearance model, the human eye contour can be presented very accurately, thereby realizing an iris segmentation method more accurate than existing ones.
  • Figure 13 is a block diagram of an iris region segmentation device 1300, in accordance with one embodiment of the present invention.
  • The iris region segmentation device 1300 includes an active appearance model establishing device 1301, an active appearance model matching device 1302, and a boundary fitting device 1303.
  • The active appearance model establishing device 1301 is a device for establishing an active appearance model of human eye images, and includes: a sample image acquisition unit 1301a for collecting sharp images of the left and right eyes of different people; a feature point calibration unit 1301b for manually calibrating feature points on each acquired sharp image; a sample image phase congruency information calculation unit 1301c for calculating phase congruency information of the human eye image for each sharp image; a feature point alignment unit 1301d for aligning the corresponding feature points in all the acquired sharp images; a human eye shape model establishing section 1301e for constructing the shape model of the active appearance model using the aligned feature points of all the sharp images; a human eye texture model establishing section 1301f for constructing the texture model of the active appearance model using the calculated phase congruency information; and a synthesis portion 1301g for combining the established human eye shape model and human eye texture model to obtain the active appearance model of the human eye.
  • The human eye texture model establishing section 1301f includes: a shape division unit (not shown) for performing Delaunay triangulation on the mean shape and on all the sample shapes obtained by marking all the sharp images with their calculated phase congruency information; a texture normalization unit (not shown) for mapping the phase congruency information of all the acquired sharp images onto the mean shape by the piecewise affine transformation; and a principal component analysis processing unit (not shown) for processing all normalized sample texture information by principal component analysis to obtain the texture parameters and texture model.
  • The active appearance model establishing device 1301 takes as input the human eye sample images of the left and right eyes of different people, generates the active appearance model of the human eye through the processing of the parts 1301a to 1301g, and outputs it to the active appearance model matching device 1302.
  • The active appearance model matching device 1302 is a device for matching a new human eye image on which iris region segmentation is to be performed with the active appearance model of the human eye output from the active appearance model establishing device 1301, and includes: an input image phase congruency information calculation unit 1302a for calculating phase congruency information for the new human eye image; an input image texture calculation unit 1302b for calculating the texture of the new human eye image using the calculated phase congruency information; and an appearance texture matching section 1302c for matching the calculated texture of the new human eye image with the texture of the active appearance model output from the establishing device 1301.
  • The active appearance model matching device 1302 takes as input the image on which the iris region is to be segmented and the active appearance model, obtains a series of feature points presenting the contour in the input image through the processing of the portions 1302a to 1302c, and outputs them to the above boundary fitting device 1303.
  • The boundary fitting device 1303 is configured to select a plurality of suitable feature points from the series of feature points and to fit the iris boundary, the pupil boundary, and the upper and lower eyelid boundaries respectively by the least squares method, and includes: an iris boundary fitting portion 1303a for fitting the iris boundary of the input image; a pupil boundary fitting portion 1303b for fitting the pupil boundary of the input image; an upper eyelid boundary fitting portion 1303c for fitting the upper eyelid boundary of the input image; and a lower eyelid boundary fitting portion 1303d for fitting the lower eyelid boundary of the input image.
  • The boundary fitting device 1303 receives the series of feature points obtained by matching and, through the processing of the portions 1303a to 1303d, completes the segmentation of the iris region of the input image, obtaining the effective iris region.
  • A block diagram of an iris region segmentation device 1400 in accordance with an alternative embodiment of the present invention is shown in FIG. 14.
  • The iris region segmentation device 1400 differs from the iris region segmentation device 1300 shown in FIG. 13 only in that an active appearance model establishing device 1401 is used instead of the active appearance model establishing device 1301 shown in FIG. 13, and in that, within it, a human eye texture model establishing unit 1401f is included in place of the human eye texture model establishing unit 1301f shown in FIG. 13.
  • The human eye texture model establishing unit 1401f includes: a texture normalization unit (not shown) for mapping the phase congruency information of all the acquired sharp images onto the mean shape using the corresponding-point-based image registration algorithm; and a principal component analysis processing unit (not shown) for processing all normalized sample texture information by principal component analysis to obtain the texture parameters and texture model.
  • In other respects, the iris region segmentation device 1400 shown in Fig. 14 is the same as the iris region segmentation device 1300 shown in Fig. 13.
  • According to the iris region segmentation device of the present invention, the active appearance model establishing device includes a sample image phase congruency information calculating section that calculates phase congruency information of the human eye image for each sharp image, and a human eye texture model establishing portion that uses the calculated phase congruency information to build the texture model of the active appearance model, thereby obtaining texture information in which the texture features of the pupil, the iris, and the upper and lower eyelids are clearer and establishing an active appearance model more accurate than the prior art.
  • The active appearance model matching device includes an input image phase congruency information calculation portion that calculates phase congruency information for the human eye image on which iris region segmentation is to be performed, and an appearance texture matching portion that matches the texture of the human eye image, obtained from the calculated phase congruency information, with the texture of the active appearance model, so that the human eye contour can be presented very accurately, enabling an iris region segmentation device more accurate than the prior art.
  • The present invention can be implemented in software and/or in a combination of software and hardware.
  • the various devices of the present invention can be implemented using an application specific integrated circuit (ASIC) or any other similar hardware device.
  • the software program of the present invention may be executed by a processor to implement the steps or functions described above.
  • the software programs (including related data structures) of the present invention can be stored in a computer readable recording medium such as a RAM memory, a magnetic or optical drive or a floppy disk and the like.
  • some of the steps or functions of the present invention may be implemented in hardware, for example, as a circuit that cooperates with a processor to perform various steps or functions.

Abstract

An iris region segmentation method and device based on an active appearance model. The method comprises: creating an active appearance model consisting of a human eye shape model and a human eye texture model by using a plurality of pre-collected human eye sample images (S101); matching an inputted human eye image on which iris region segmentation will be performed and the pre-created active appearance model, to obtain a plurality of feature points that present a human eye contour in the inputted human eye image (S102); and selecting, from the multiple feature points, feature points used for fitting boundaries in the inputted human eye image to perform fitting, so as to obtain a segmented iris region (S103), wherein phase congruency information is used for both active appearance model creating and active appearance model matching.

Description

Iris region segmentation method and device based on an active appearance model

Technical Field

The present invention relates to the field of image processing, and in particular to an iris region segmentation method and apparatus based on an Active Appearance Model (AAM).

Background Art

Today's society is highly information-based: on the one hand, the demand for information keeps growing; on the other hand, the requirements for information security are ever higher. Traditional authentication technologies include certificates, magnetic cards, and passwords, but their security is limited. Biometric technology has therefore emerged. Biometric identification uses certain unique features of the human body, discriminated by suitable techniques, to establish a person's identity. Compared with traditional authentication technology, biometrics offers higher effectiveness, security, and reliability.

Early biometrics relied mainly on faces, fingerprints, and signatures, features that are easily altered. Because the iris is innate, hard to lose, hard to damage, and easy to identify, iris verification has in recent years gained wide recognition and attention from academia and industry.

The iris is an annular region of the human eye between the black pupil and the white sclera, containing many interlaced spots, filaments, crowns, stripes, crypts, and other detailed features. When an iris image is captured, physical constraints mean that the pupil, eyelids, and eyelashes are usually imaged together with the iris. Since iris recognition needs only the region between the pupil and the sclera that is not occluded by the eyelids and eyelashes, and no other information, locating and segmenting the iris region has become a hot and difficult topic in the field of iris recognition.

The classical iris segmentation methods are the integro-differential operator proposed by Daugman and the two-step localization algorithm proposed by Wildes, which combines edge detection with the Hough transform.

The advantage of Daugman's integro-differential operator is that it can be computed directly on the grayscale image without preprocessing. However, it has the following disadvantages: it becomes very slow when the coarse localization of the center and radius of the outer iris circle is inaccurate; the light spots formed during iris image acquisition affect localization accuracy, and localization is error-prone under uneven illumination, shadows, reflections, and occlusions; and because it must search and iterate for the optimum in a three-dimensional parameter space, its computational cost is high and its speed relatively slow.

As for the two-step localization algorithm of Wildes combining edge detection with the Hough transform, the Hough transform is insensitive to image noise and robust, so the algorithm's advantage is its insensitivity to noise. However, the Hough transform is computationally expensive and the extracted parameters are constrained by the quantization interval of the parameter space, so the algorithm depends excessively on the accuracy of edge point detection; in addition, searching for the center and radius requires voting in a three-dimensional parameter space, so the computation and storage costs are large.

Neither of these two classical iris region segmentation methods is therefore ideal. How to segment an effective iris region quickly and accurately remains a technical problem to be solved.
发明内容Summary of the invention
本发明正是为了解决上述技术课题而完成的,其目的在于提供一种准确的虹膜区域分割方法及装置,以便快速且鲁棒地实现虹膜区域的分割。The present invention has been made to solve the above technical problems, and an object thereof is to provide an accurate iris region segmentation method and apparatus for rapidly and robustly realizing segmentation of an iris region.
为了解决上述课题,与现有技术不同,本发明者们经过锐意研究将广泛应用于人脸建模、人脸定位领域的主动外观模型应用于虹膜区域分割领域,同时充分考虑了上下眼皮对虹膜遮挡的影响,提出了本发明的基于主动外观模型的虹膜区域分割方法及装置。In order to solve the above problems, the present inventors have made intensive research to apply the active appearance model widely used in face modeling and face localization to the field of iris region segmentation, and fully consider the upper and lower eyelids to the iris. Based on the influence of occlusion, the active appearance model-based iris region segmentation method and apparatus of the present invention are proposed.
According to one aspect of the present invention, an iris region segmentation method is provided, characterized in that the method is based on an active appearance model and comprises: an active appearance model establishing step of using a plurality of pre-collected human eye sample images to establish an active appearance model composed of a human eye shape model and a human eye texture model; an active appearance model matching step of matching an input human eye image to be subjected to iris region segmentation against the active appearance model established in the active appearance model establishing step, so as to obtain a plurality of feature points representing the human eye contour in the input image; and a boundary fitting step of selecting, from the feature points obtained in the matching step, the feature points used to fit each boundary in the input image and performing the fitting so as to obtain the segmented iris region, wherein phase congruency information is utilized in both the active appearance model establishing step and the active appearance model matching step.

According to the iris region segmentation method of the present invention, by utilizing the phase congruency information of the human eye images in the active appearance model establishing step, texture information can be obtained in which the texture features of the pupil, iris, and upper and lower eyelids are clearer than those obtained by existing methods, so that a more accurate active appearance model can be built than in the prior art; furthermore, by utilizing the phase congruency information of the human eye image in the active appearance model matching step, the human eye contour can be represented very accurately, enabling an iris region segmentation method that is more accurate than the prior art.
Preferably, in the iris region segmentation method, the active appearance model establishing step comprises: a sample image phase congruency computation step of computing phase congruency information for each of the plurality of pre-collected human eye sample images; and a human eye texture model establishing step of using the computed phase congruency information to establish the human eye texture model constituting the active appearance model.

According to the iris region segmentation method of the present invention, by using the computed phase congruency information to assist in labeling the human eye images, a texture model more accurate than one built in the existing manner can be obtained.
Preferably, in the iris region segmentation method, the human eye texture model establishing step comprises: a shape partitioning step of applying Delaunay triangulation to the mean shape of the pre-collected human eye sample images and to each of the sample shapes obtained by labeling those images with the aid of the computed phase congruency information; a texture normalization step of mapping the computed phase congruency information onto the mean shape via piecewise affine transformation to obtain normalized sample texture information; and a principal component analysis step of processing the sample texture information by principal component analysis to obtain the texture parameters and the texture model.

Alternatively, in the iris region segmentation method, the human eye texture model establishing step comprises: a texture normalization step of mapping the computed phase congruency information onto the mean shape using a correspondence-point-based image registration algorithm to obtain normalized sample texture information; and a principal component analysis step of processing the sample texture information by principal component analysis to obtain the texture parameters and the texture model.
Preferably, in the iris region segmentation method, the active appearance model matching step comprises: an input image phase congruency computation step of computing phase congruency information for the input human eye image to be subjected to iris region segmentation; an input image texture computation step of computing the texture of the input image from its computed phase congruency information; and an appearance texture matching step of matching the computed texture of the input image against the texture of the active appearance model established in the establishing step, so as to obtain a plurality of feature points representing the human eye contour in the input image.
Preferably, in the iris region segmentation method, the active appearance model establishing step further comprises: a sample image collection step of pre-collecting the plurality of human eye sample images of the left and right eyes of different people; a feature point calibration step of manually calibrating feature points on the sample images; a feature point alignment step of aligning the corresponding feature points across the sample images; a human eye shape model establishing step of using the aligned feature points to establish the human eye shape model constituting the active appearance model; and a synthesis step of combining the established human eye shape model and human eye texture model to obtain the active appearance model.

Preferably, in the iris region segmentation method, the feature point alignment step uses Procrustes analysis to obtain aligned images from which translation, scale, and rotation have been removed.

Preferably, in the iris region segmentation method, principal component analysis is used in the human eye shape model establishing step and the human eye texture model establishing step to obtain the human eye shape model and the human eye texture model.

According to the iris region segmentation method of the present invention, principal component analysis is used to process the data when deriving both the shape model and the texture model, which reduces the amount of data to be processed and saves computation time.
Preferably, in the iris region segmentation method, the boundaries include the iris boundary, the pupil boundary, and the upper and lower eyelid boundaries.

Preferably, in the iris region segmentation method, when fitting the iris boundary in the boundary fitting step, at least some of the feature points located on the left and right iris boundaries are selected from the feature points obtained in the matching step and used for the fitting.

Preferably, in the iris region segmentation method, when fitting the pupil boundary in the boundary fitting step, at least some of the feature points located on the pupil boundary are selected from the feature points obtained in the matching step and used for the fitting.

Preferably, in the iris region segmentation method, when fitting the upper and lower eyelid boundaries in the boundary fitting step, at least some of the feature points located on the middle portions of the eyelid boundaries, at a certain distance from the eye corners, are selected from the feature points obtained in the matching step and used for the fitting.
According to another aspect of the present invention, an iris region segmentation apparatus is provided, characterized in that the apparatus is based on an active appearance model and comprises: an active appearance model establishing device configured to use a plurality of pre-collected human eye sample images to establish an active appearance model composed of a human eye shape model and a human eye texture model; an active appearance model matching device configured to match an input human eye image to be subjected to iris region segmentation against the active appearance model established by the establishing device, so as to obtain a plurality of feature points representing the human eye contour in the input image; and a boundary fitting device configured to select, from the feature points obtained by the matching device, the feature points used to fit each boundary in the input image and to perform the fitting so as to obtain the segmented iris region, wherein phase congruency information is utilized in both the active appearance model establishing device and the active appearance model matching device.

According to the iris region segmentation apparatus of the present invention, by utilizing the phase congruency information of the human eye images in the active appearance model establishing device, texture information can be obtained in which the texture features of the pupil, iris, and upper and lower eyelids are clearer than those obtained by existing methods, so that a more accurate active appearance model can be built than in the prior art; furthermore, by utilizing the phase congruency information of the human eye image in the active appearance model matching device, the human eye contour can be represented very accurately, enabling an iris region segmentation apparatus that is more accurate than the prior art.
Preferably, in the iris region segmentation apparatus, the active appearance model establishing device comprises: a sample image phase congruency computation section configured to compute phase congruency information for each of the pre-collected human eye sample images; and a human eye texture model establishing section configured to use the computed phase congruency information to establish the human eye texture model constituting the active appearance model.

According to the iris region segmentation apparatus of the present invention, by using the computed phase congruency information to assist in labeling the human eye images, a texture model more accurate than one built in the existing manner can be obtained.
Preferably, in the iris region segmentation apparatus, the human eye texture model establishing section comprises: a shape partitioning unit configured to apply Delaunay triangulation to the mean shape of the pre-collected human eye sample images and to each of the sample shapes obtained by labeling those images with the aid of the computed phase congruency information; a texture normalization unit configured to map the computed phase congruency information onto the mean shape via piecewise affine transformation to obtain normalized sample texture information; and a principal component analysis unit configured to process the sample texture information by principal component analysis to obtain the texture parameters and the texture model.

Alternatively, in the iris region segmentation apparatus, the human eye texture model establishing section comprises: a texture normalization unit configured to map the computed phase congruency information onto the mean shape using a correspondence-point-based image registration algorithm to obtain normalized sample texture information; and a principal component analysis unit configured to process the sample texture information by principal component analysis to obtain the texture parameters and the texture model.
Preferably, in the iris region segmentation apparatus, the active appearance model matching device comprises: an input image phase congruency computation section configured to compute phase congruency information for the input human eye image to be subjected to iris region segmentation; an input image texture computation section configured to compute the texture of the input image from its computed phase congruency information; and an appearance texture matching section configured to match the computed texture of the input image against the texture of the active appearance model established by the establishing device, so as to obtain a plurality of feature points representing the human eye contour in the input image.

Preferably, in the iris region segmentation apparatus, the active appearance model establishing device further comprises: a sample image collection section configured to pre-collect the plurality of human eye sample images of the left and right eyes of different people; a feature point calibration section configured to manually calibrate feature points on the sample images; a feature point alignment section configured to align the corresponding feature points across the sample images; a human eye shape model establishing section configured to use the aligned feature points to establish the human eye shape model constituting the active appearance model; and a synthesis section configured to combine the established human eye shape model and human eye texture model to obtain the active appearance model.

Preferably, in the iris region segmentation apparatus, the feature point alignment section is configured to use Procrustes analysis to obtain aligned images from which translation, scale, and rotation have been removed.

Preferably, in the iris region segmentation apparatus, the human eye shape model establishing section and the human eye texture model establishing section are configured to obtain the human eye shape model and the human eye texture model by principal component analysis.

According to the iris region segmentation apparatus of the present invention, principal component analysis is used to process the data when deriving both the shape model and the texture model, which reduces the amount of data to be processed and saves computation time.
Preferably, in the iris region segmentation apparatus, the boundaries include the iris boundary, the pupil boundary, and the upper and lower eyelid boundaries.

Preferably, in the iris region segmentation apparatus, when fitting the iris boundary in the boundary fitting device, at least some of the feature points located on the left and right iris boundaries are selected from the feature points obtained by the matching device and used for the fitting.

Preferably, in the iris region segmentation apparatus, when fitting the pupil boundary in the boundary fitting device, at least some of the feature points located on the pupil boundary are selected from the feature points obtained by the matching device and used for the fitting.

Preferably, in the iris region segmentation apparatus, when fitting the upper and lower eyelid boundaries in the boundary fitting device, at least some of the feature points located on the middle portions of the eyelid boundaries, at a certain distance from the eye corners, are selected from the feature points obtained by the matching device and used for the fitting.
Brief Description of the Drawings

Other features, objects, and advantages of the present invention will become more apparent from the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings, in which:

FIG. 1 is a flowchart of an iris region segmentation method 100 according to one embodiment of the present invention;

FIG. 2 is a diagram of the implementation flow of the human eye active appearance model establishing step S101 according to one embodiment of the present invention;

FIG. 3 is an example captured image of a human eye;

FIG. 4 is a schematic diagram of the selected human eye feature points;

FIG. 5 is a diagram showing the phase congruency information computed for the example captured image;

FIG. 6 is a diagram of the implementation flow of the human eye texture model establishing step S1015 according to one embodiment of the present invention;

FIG. 7 is a schematic diagram of the Delaunay triangulation of a point set;

FIG. 8 is a schematic diagram of piecewise linear affine mapping;

FIG. 9 is a diagram of the implementation flow of the human eye texture model establishing step S1015' according to an alternative embodiment of the present invention;

FIG. 10 is a diagram of the implementation flow of the step S102 of matching the active appearance model against a new human eye image according to one embodiment of the present invention;

FIG. 11 is a diagram showing a series of feature points reflecting the human eye contour, obtained after a new human eye image to be subjected to iris region segmentation is matched against the active appearance model;

FIG. 12 is a diagram showing the fitting results;

FIG. 13 is a block diagram of an iris region segmentation apparatus 1300 according to one embodiment of the present invention;

FIG. 14 is a block diagram of an iris region segmentation apparatus 1400 according to an alternative embodiment of the present invention.

The same or similar reference numerals in the drawings denote the same or similar elements.
Detailed Description

Embodiments of the present invention will be described more fully below with reference to the accompanying drawings, in which embodiments of the invention are shown. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.

The present invention is described in further detail below with reference to the accompanying drawings.
FIG. 1 is a flowchart of an iris region segmentation method 100 according to one embodiment of the present invention.

As shown in FIG. 1, first, in step S101, an active appearance model of the human eye is established. An active appearance model, used for boundary detection and image segmentation, is formed by building a shape model and a texture model from the shape information and texture information of images and then combining the two; its purpose is to obtain the shape of the target region, the affine transformation coefficients, and so on from a previously trained model.

The following example illustrates how the active appearance model of the human eye is established.

FIG. 2 is a diagram of the implementation flow of the human eye active appearance model establishing step S101 according to one embodiment of the present invention.
First, human eye sample images are collected and feature points are calibrated (step S1011). Specifically, clear images I of the left and right eyes of different people are collected; the image shown in FIG. 3 is one such clear image I. After N such clear images I have been collected, n feature points {(x_i, y_i), i = 1, …, n} are manually calibrated on each image. When calibrating the n feature points, points where the texture features change markedly (for example, on the upper and lower eyelid boundaries, the iris boundary, and the pupil boundary) are selected. Note that, because of eyelid occlusion, the upper and lower boundaries of the iris may not exist; therefore, instead of selecting feature points all around the circular iris boundary, only points on the left and right sides of the iris that are not occluded by the eyelids are selected.
FIG. 4 is a schematic diagram of the selected human eye feature points. Owing to physical constraints, points on the pupil, eyelashes, and so on may be captured when acquiring an iris image; therefore, to avoid the influence of the pupil, eyelashes, and the like, a total of 68 feature points are selected in this embodiment. Their positions are shown in FIG. 4: feature points 19 to 36 (18 points) on the upper eyelid boundary, feature points 1 to 18 (18 points) on the lower eyelid boundary, feature points 57 to 68 (12 points) on the pupil boundary, and 10 feature points each on the left and right iris boundaries not occluded by the eyelids, namely points 52 to 56 and 37 to 41 on the left and points 42 to 51 on the right.
After the feature points have been calibrated, the phase congruency information of the human eye image is computed for each of the N clear images I (step S1012). Since a human eye image is understood mainly through low-level features such as step edges and zero-crossing edges, and unlike the prior art, phase congruency information, which improves the spatial resolution of edge detection, is used in establishing the active appearance model of the present invention. This is a method of edge detection and texture analysis in the frequency domain. Phase congruency is a measure of the similarity of the phases of the frequency components at each position of an image; it is a dimensionless quantity whose value falls from 1 to 0 as a position goes from a salient feature to no feature. Detecting an image with phase congruency information extracts the texture features of the image rather than only its edges; moreover, because phase congruency is insensitive to image brightness and contrast, it also overcomes the effect of light and shade on the texture structure. The phase congruency of a clear human eye image I at a point x can be computed by the following equation (1):

$$PC(x) = \frac{\sum_{j} E_{\theta_j}(x)}{\varepsilon + \sum_{j} \sum_{n} A_{n,\theta_j}(x)} \qquad (1)$$

where ε is a small positive constant (for example, 0.01), θ_j = jπ/J, j = {0, …, J−1} is the orientation angle of the filter, J is the number of orientations, and n indexes the scales of the filter bank. A_{n,θ_j}(x) and E_{θ_j}(x) are, respectively, the local amplitude and the local energy along the direction θ_j, computed by the following equations (2) and (3):

$$A_{n,\theta_j}(x) = \sqrt{e_{n,\theta_j}(x)^2 + o_{n,\theta_j}(x)^2} \qquad (2)$$

$$E_{\theta_j}(x) = \sqrt{\Big(\sum_{n} e_{n,\theta_j}(x)\Big)^2 + \Big(\sum_{n} o_{n,\theta_j}(x)\Big)^2} \qquad (3)$$

where e_{n,θ_j}(x) and o_{n,θ_j}(x) are the even and odd responses along the direction θ_j at each point x, obtained by convolving the clear human eye image I with a two-dimensional log-Gabor filter. The transfer function of the two-dimensional log-Gabor filter in the frequency domain is defined as:

$$G(\omega, \theta) = \exp\!\left(-\frac{\big(\log(\omega/\omega_0)\big)^2}{2\big(\log(\sigma_r/\omega_0)\big)^2}\right)\exp\!\left(-\frac{(\theta - \theta_j)^2}{2\sigma_\theta^2}\right) \qquad (4)$$

where ω_0 is the center frequency of the filter, σ_r is the bandwidth of the filter, and σ_θ is the angular bandwidth of the filter.

The phase congruency information computed by the above formulas for the image of FIG. 3 after feature point calibration is shown in FIG. 5. As FIG. 5 shows, phase congruency yields an eye contour image in which the texture features of the upper and lower eyelids, the pupil, and the iris are very clear.
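Purely as an illustration of equations (1) to (4), the following Python sketch computes a phase congruency map with a small log-Gabor filter bank; it omits the noise compensation found in fuller implementations, and all function names and default parameter values (number of scales, wavelengths, bandwidths) are assumptions, not values fixed by the patent.

```python
import numpy as np

def log_gabor_bank(rows, cols, n_scales=4, n_orient=6,
                   min_wavelength=3.0, mult=2.1,
                   sigma_r=0.55, sigma_theta=0.4):
    # Build 2-D log-Gabor filters in the frequency domain, one per
    # (scale, orientation) pair; parameter defaults are assumed values.
    y, x = np.mgrid[-(rows // 2):rows - rows // 2,
                    -(cols // 2):cols - cols // 2]
    radius = np.hypot(x / cols, y / rows)       # normalized frequency radius
    radius[rows // 2, cols // 2] = 1.0          # avoid log(0) at DC
    theta = np.arctan2(-y, x)
    for j in range(n_orient):
        theta_j = j * np.pi / n_orient          # orientation angle of eq. (1)
        d_theta = np.arctan2(np.sin(theta - theta_j),
                             np.cos(theta - theta_j))
        angular = np.exp(-d_theta ** 2 / (2 * sigma_theta ** 2))
        for s in range(n_scales):
            f0 = 1.0 / (min_wavelength * mult ** s)   # centre frequency
            radial = np.exp(-np.log(radius / f0) ** 2
                            / (2 * np.log(sigma_r) ** 2))
            radial[rows // 2, cols // 2] = 0.0
            yield j, np.fft.ifftshift(radial * angular)

def phase_congruency(img, n_orient=6, eps=0.01):
    # PC(x) = sum_j E_theta_j(x) / (eps + sum_j sum_n A_{n,theta_j}(x)), eq. (1)
    F = np.fft.fft2(img.astype(float))
    rows, cols = img.shape
    sum_amp = np.zeros(img.shape)               # denominator term of eq. (1)
    even = np.zeros((n_orient,) + img.shape)    # sums over n of even responses e
    odd = np.zeros((n_orient,) + img.shape)     # sums over n of odd responses o
    for j, G in log_gabor_bank(rows, cols, n_orient=n_orient):
        resp = np.fft.ifft2(F * G)              # complex response e + i*o
        even[j] += resp.real
        odd[j] += resp.imag
        sum_amp += np.abs(resp)                 # local amplitude, eq. (2)
    energy = np.sqrt(even ** 2 + odd ** 2).sum(axis=0)   # local energy, eq. (3)
    return energy / (eps + sum_amp)
```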
After the phase congruency information has been computed for the N clear images I, the corresponding feature points of the N images are aligned (step S1013). Specifically, Procrustes analysis is performed on the N calibrated images: the centroid of each of the N shapes is computed and all N centroids are moved to the same position; the N shapes are then scaled up or down to the same size; finally, the rotation difference is computed from the positions of corresponding points of two shapes, and the shapes are rotated so that their orientations agree. In this way, the corresponding feature points of the different images are aligned, yielding aligned human eye images from which translation, scale, and rotation have been removed.
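A minimal sketch of this alignment, assuming the calibrated shapes are given as an (N, n, 2) array of landmark coordinates; using the first shape as the common reference in a single pass is a simplification of full generalized Procrustes analysis.

```python
import numpy as np

def procrustes_align(shapes):
    # shapes: (N, n, 2) array of landmark coordinates.
    # 1) remove translation: move every centroid to the origin
    shapes = shapes - shapes.mean(axis=1, keepdims=True)
    # 2) remove scale: normalize each shape to unit Frobenius norm
    shapes = shapes / np.linalg.norm(shapes, axis=(1, 2), keepdims=True)
    # 3) remove rotation: rotate each shape onto the first one (Kabsch)
    ref = shapes[0]
    aligned = []
    for s in shapes:
        u, _, vt = np.linalg.svd(s.T @ ref)
        d = np.sign(np.linalg.det(u @ vt))      # guard against reflections
        rot = u @ np.diag([1.0, d]) @ vt
        aligned.append(s @ rot)
    return np.stack(aligned)
```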
After the feature points of the N collected clear images I have been aligned in step S1013, the shape model constituting the active appearance model is established (step S1014).
Specifically, first, the n aligned feature points of each image are concatenated to form a shape vector s_i, and the N clear images I are assembled into a 2n × N human eye shape matrix s using equation (5):

$$s = (s_1, s_2, \ldots, s_N), \qquad s_i = (x_{1i}, \ldots, x_{ni}, y_{1i}, \ldots, y_{ni})^T \qquad (5)$$

Next, the shape matrix s is averaged over its columns using equation (6) to obtain the mean shape $\bar{s}$:

$$\bar{s} = \frac{1}{N} \sum_{i=1}^{N} s_i \qquad (6)$$

Then the mean shape $\bar{s}$ is subtracted from the shape matrix s to give the difference matrix D = {d_ij, i = 1, …, 2n, j = 1, …, N}, where $d_{ij} = s_{ij} - \bar{s}_i$; the covariance matrix U of D is then computed using equation (7):

$$U = DD^{T} \qquad (7)$$

After the covariance matrix U is obtained, its eigenvalues and eigenvectors are computed. The eigenvalues are sorted in descending order and the eigenvectors corresponding to the k largest eigenvalues are retained, k being chosen so that these eigenvalues account for more than 95% of the total energy. These eigenvectors form the principal component analysis (PCA) projection matrix Φ_s, which gives the shape model of equation (8):

$$s = \bar{s} + \Phi_s b_s \qquad (8)$$

where $\bar{s}$ is the mean shape, Φ_s is the transformation matrix formed by the principal shape eigenvectors obtained by PCA, and b_s is the statistical shape parameter controlling shape variation. Starting from the mean shape $\bar{s}$, a new shape can be obtained from the model of equation (8) by adjusting the statistical shape parameter b_s.
After the human eye shape model has been established, the computed phase congruency information is used to establish the texture model constituting the active appearance model (step S1015).

Specifically, FIG. 6 shows the implementation flow of the human eye texture model establishing step S1015 according to one embodiment of the present invention.
First, in step S1015a, Delaunay triangulation is applied to the mean shape $\bar{s}$ and to each of the N sample shapes, each represented by a series of feature points, that were obtained by labeling the N clear images I with the aid of their computed phase congruency information. Delaunay triangulation is a technique that connects spatial points into triangles so as to maximize the minimum angle over all triangles; its defining property is that the circumcircle of any triangle contains no other vertex. FIG. 7 is a schematic diagram of the Delaunay triangulation of a point set. One procedure for the Delaunay triangulation shown in FIG. 7 is as follows:
1) Select an arbitrary point in the point set, then select the point closest to it, and connect the two points to form a directed baseline;

2) Apply the Delaunay criterion to search for a third point located to the right of the directed baseline;

3) Create the Delaunay triangle, then take as new baselines the two edges of the generated triangle directed from the start of the baseline to the third point and from the third point to the end of the baseline;

4) Repeat 2) and 3) until all baselines have been used.
Using the Delaunay triangulation procedure described above, the mean shape $\bar{s}$ and the N sample shapes are each triangulated, whereby the mean shape and all N sample shapes are partitioned into a series of triangles.
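In practice the incremental construction above is typically delegated to a library. The sketch below uses SciPy's Delaunay triangulation and reuses the simplices of the mean shape for every sample shape so that the triangles correspond across shapes; the file names are hypothetical.

```python
import numpy as np
from scipy.spatial import Delaunay

mean_shape = np.loadtxt('mean_shape.txt')       # (n, 2) landmarks, hypothetical file
tri = Delaunay(mean_shape)
# tri.simplices is an (n_triangles, 3) array of vertex indices; applying the
# same index triples to a sample shape's landmarks yields corresponding triangles.
sample_shape = np.loadtxt('sample_shape.txt')   # hypothetical file
sample_triangles = sample_shape[tri.simplices]  # (n_triangles, 3, 2)
```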
Next, in step S1015b, the phase congruency information of the N collected clear images I is mapped onto the mean shape $\bar{s}$ by piecewise affine transformation, normalizing the textures. Because the triangles obtained by the Delaunay triangulation of the mean shape $\bar{s}$ correspond one-to-one with those of the N sample shapes, the position of each point of a triangle of a sample shape within the corresponding triangle of the mean shape $\bar{s}$ can be computed by piecewise linear affine projection, and the phase congruency value at that point can then be mapped to the position of the corresponding point in the mean shape $\bar{s}$.
FIG. 8 is a schematic diagram of the piecewise linear affine mapping. As shown in FIG. 8, the triangle on the left and the triangle on the right represent, respectively, a triangle of a sample shape and a triangle of the mean shape obtained by Delaunay triangulation. The positions and correspondence of the two triangles' vertices v_1, v_2, v_3 and v'_1, v'_2, v'_3 are known. For a point p inside the sample-shape triangle (the coordinates of p are known), a linear affine transformation based on barycentric coordinates yields the position of the corresponding point p' inside the mean-shape triangle, completing the mapping of the phase congruency (i.e., texture) information for that point.
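A minimal sketch of the barycentric mapping of FIG. 8, with each triangle given as three vertex coordinate pairs; the function name is illustrative.

```python
import numpy as np

def barycentric_map(p, src_tri, dst_tri):
    # Express p in barycentric coordinates of the source triangle,
    # then evaluate the same coordinates in the destination triangle.
    v1, v2, v3 = src_tri
    T = np.column_stack([v2 - v1, v3 - v1])     # 2x2 basis of the source triangle
    beta, gamma = np.linalg.solve(T, p - v1)    # coordinates w.r.t. v2 and v3
    alpha = 1.0 - beta - gamma
    w1, w2, w3 = dst_tri
    return alpha * w1 + beta * w2 + gamma * w3  # corresponding point p'
```

In a full warp, iterating over the pixels of each mean-shape triangle and mapping them in the inverse direction (from the mean shape back into the sample shape) is a common way to resample the texture without leaving holes.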
The phase congruency (i.e., texture) information of each of the N clear images I can be mapped onto the mean shape $\bar{s}$ by this method, normalizing the textures; that is, through the piecewise linear affine transformation, the phase congruency information of the N sample shapes is mapped into the common reference frame of the mean shape, ready for the texture model to be built in the next step.
Next, in step S1015c, all of the normalized sample texture information is processed by principal component analysis to obtain the texture parameters and hence the texture model. Specifically, first, all normalized sample textures are averaged to obtain the mean texture $\bar{g}$. Then, using a method similar to step S1014, principal component analysis is performed to obtain the eigenvectors corresponding to the m largest eigenvalues, sorted by magnitude. These eigenvectors form the PCA projection matrix Φ_g, which gives the texture model of equation (9):

$$g = \bar{g} + \Phi_g b_g \qquad (9)$$

where $\bar{g}$ is the mean texture, Φ_g is the transformation matrix formed by the principal texture eigenvectors obtained by PCA, and b_g is the statistical texture parameter controlling texture variation. Starting from the mean texture $\bar{g}$, a new texture can be obtained from the model of equation (9) by adjusting the statistical texture parameter b_g.
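Step S1015c is the same principal component analysis as in step S1014, so, purely as an illustration, the PCA helper sketched after equation (8) above can be reused on the stacked normalized textures (the variable names are assumptions):

```python
# textures: (N, n_pixels) array, one normalized phase-congruency texture per
# row, produced by the piecewise affine (or thin-plate spline) warp.
mean_tex, Phi_g = build_pca_model(textures, var_keep=0.95)  # defined earlier
# A new texture is synthesized as  g = mean_tex + Phi_g @ b_g  (eq. 9).
```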
Note that the implementation flow of the human eye texture model establishing step S1015 shown in FIG. 6 is only an example and can be varied in many ways to achieve the same effect. For example, instead of the Delaunay triangulation and piecewise affine transformation of FIG. 6, a correspondence-point-based image registration algorithm can be used to map the phase congruency information of the N collected clear images I onto the mean shape $\bar{s}$ to normalize the textures. This alternative flow is shown in FIG. 9. Specifically, first, a correspondence-point-based image registration algorithm, such as one based on thin-plate spline functions, is used to map the phase congruency information of the N clear images I onto the mean shape $\bar{s}$, normalizing the textures; the basic idea of correspondence-point-based registration is to compute an optimal spatial transformation that brings the corresponding feature points of two or more images, acquired under different conditions or by different imaging devices, into one-to-one positional correspondence (step S1015'a). Then, as in FIG. 6, all of the normalized sample texture information is processed by principal component analysis to obtain the texture parameters and hence the texture model (step S1015c).
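As one possible realization of the thin-plate-spline alternative (an assumption, not the patent's specified implementation), SciPy's radial basis function interpolator can fit a TPS mapping constrained by the landmark pairs:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_mapping(src_landmarks, dst_landmarks):
    # Returns a callable mapping arbitrary (m, 2) source coordinates into the
    # target frame, constrained to agree at the given landmark pairs.
    return RBFInterpolator(src_landmarks, dst_landmarks,
                           kernel='thin_plate_spline')

# warp = tps_mapping(sample_landmarks, mean_shape_landmarks)
# mapped = warp(pixel_coords)   # where each sample pixel lands in the mean shape
```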
Regarding the steps described above, note that although FIG. 2 shows the feature point alignment (step S1013) and the establishment of the shape model (step S1014) after the computation of the phase congruency information for the N clear images I (step S1012), the order of these steps is not limited to this. As long as steps S1012 and S1014 are executed before the texture model is built (step S1015/step S1015') and step S1013 is executed before step S1014, the steps may be reordered arbitrarily or executed in parallel. For example, steps S1012 and S1013 may be executed simultaneously, followed by step S1014 and then step S1015/step S1015'; or step S1012 may be executed after step S1013, followed by steps S1014 and S1015/S1015'; or step S1012 may be executed after steps S1013 and S1014 have been executed in sequence, followed by step S1015/step S1015'; and so on.
Finally, after the shape model and texture model have been established, the two models are combined into the active appearance model (step S1016). Specifically, first, b_s and b_g are concatenated according to equation (10) to obtain the appearance feature vector b:

$$b = \begin{pmatrix} w_s b_s \\ b_g \end{pmatrix} \qquad (10)$$

where w_s is a diagonal matrix used to adjust the difference in dimension between b_s and b_g. Next, principal component analysis is applied to the resulting appearance feature vectors b to further remove the correlation between shape and texture, giving the active appearance model of equation (11):

$$b = \bar{b} + Qc \qquad (11)$$

where $\bar{b}$ is the mean appearance vector, Q is the transformation matrix formed by the principal appearance eigenvectors obtained by PCA, and c is the appearance model parameter controlling appearance variation. Thus, given an appearance model parameter c and the corresponding similarity transformation matrices (such as scaling and rotation matrices), a human eye image can be synthesized.
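A sketch of step S1016, where the rows of Bs and Bg hold the per-sample parameters b_s and b_g; reducing w_s to a single scalar weight is a common simplification of the diagonal matrix in equation (10), and all names here are assumptions.

```python
import numpy as np

def build_appearance_model(Bs, Bg, k):
    # Bs: (N, ks) shape parameters; Bg: (N, kg) texture parameters.
    ws = np.sqrt(Bg.var() / max(Bs.var(), 1e-12))   # scalar stand-in for w_s
    B = np.hstack([ws * Bs, Bg])                    # appearance vectors b, eq. (10)
    mean_b = B.mean(axis=0)                         # mean appearance vector
    _, _, vt = np.linalg.svd(B - mean_b, full_matrices=False)
    Q = vt[:k].T                                    # appearance basis, eq. (11)
    return mean_b, ws, Q                            # b ~ mean_b + Q @ c
```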
After the establishment of the human eye active appearance model is complete, the process returns to FIG. 1 and proceeds to step S102. In step S102, the active appearance model obtained in step S101 is matched against a new human eye image, different from the N clear images I, that is to be subjected to iris region segmentation, so as to obtain a series of feature points that accurately represent the human eye contour in that new image.

The following example illustrates how the active appearance model is matched against a new human eye image.

FIG. 10 is a diagram of the implementation flow of the step S102 of matching the active appearance model against a new human eye image according to one embodiment of the present invention.
First, phase congruency information is computed for a new human eye image I_n to be subjected to iris region segmentation (step S1021). The computation can be the same as in step S1012 of FIG. 2.
Next, the phase congruency information computed in step S1021 is used to compute the texture g_s obtained by warping the human eye image I_n from its current shape s to the mean shape $\bar{s}$ (step S1022).
Then, the appearance model parameter c of the active appearance model obtained in step S101 is repeatedly adjusted to optimize the objective function given by equation (12), until the appearance texture of the active appearance model agrees with that of the human eye image I_n (step S1023):

$$\Delta = \lVert \delta_g \rVert^2 = \lVert g_s - g_m \rVert^2 \qquad (12)$$

where δ_g = g_s − g_m, g_s is the texture of the new human eye image I_n to be subjected to iris region segmentation, and g_m is the texture synthesized by the active appearance model obtained in step S101 under the current model parameters.
The optimization of the objective function of equation (12) proceeds as follows (a code sketch of this loop is given after the list):

I. Initialize the iteration count t and the appearance model parameter c, i.e., set t = 0 and c = 0;

II. Compute the difference between the texture of the human eye image I_n and the texture of the active appearance model obtained in step S101: δ_g = g_s − g_m;

III. Update the appearance model parameter according to c' = c − kδ_c (where k is an adjustment coefficient, initially k = 1, and δ_c is the appearance parameter update), and, under the new parameter c', compute the difference δ'_g between the texture of I_n and the texture of the model;

IV. Compare δ_g and δ'_g. If ‖δ'_g‖ < ‖δ_g‖, assign the current parameter value to c, i.e., set c = c', and go to V; otherwise return to III and continue adjusting the active appearance model by changing the adjustment coefficient k in turn (for example, k = 1.5, 0.5, 0.25);

V. Update the iteration count t = t + 1 and check whether the texture difference δ'_g between I_n and the model is below the threshold ξ; if so, stop; otherwise return to III. If the iteration count exceeds a predetermined number, the image is deemed to contain no human eye.
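The sketch below mirrors steps I to V. The three helper callables are assumptions the patent leaves unspecified: g_sample(c) warps the input image's phase congruency map to the shape implied by the parameters c and returns g_s; g_model(c) returns the model texture g_m; and delta_c(dg) converts the texture residual into a parameter update (in classical active appearance models, a precomputed linear regression).

```python
import numpy as np

def match_aam(g_sample, g_model, delta_c, n_params,
              xi=1e-3, max_iter=50, k_steps=(1.0, 1.5, 0.5, 0.25)):
    c = np.zeros(n_params)                       # step I: t = 0, c = 0
    dg = g_sample(c) - g_model(c)                # step II: texture residual
    err = dg @ dg                                # squared error, eq. (12)
    for _ in range(max_iter):                    # step V: bounded iterations
        if err < xi:
            break                                # converged
        for k in k_steps:                        # steps III-IV: damped update
            c_try = c - k * delta_c(dg)
            dg_try = g_sample(c_try) - g_model(c_try)
            err_try = dg_try @ dg_try
            if err_try < err:                    # accept the first improvement
                c, dg, err = c_try, dg_try, err_try
                break
        else:
            break                                # no step size helped; give up
    # per step V, failure to converge is read as "the image contains no eye"
    return (c, err) if err < xi else (None, err)
```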
FIG. 11 shows a human eye image to be subjected to iris region segmentation. Using the matching step S102 described above, the texture of the image shown in FIG. 11 is matched against the texture of the previously established active appearance model, and a series of feature points is obtained on the image. As FIG. 11 shows, these feature points coincide very precisely with the iris boundary, pupil boundary, and upper and lower eyelid boundaries in the image, and they represent the contour of the eye very accurately.

Therefore, for any new human eye image I_n to be subjected to iris region segmentation, as long as the objective function of equation (12) is optimized in matching step S102 as described above, until the difference between the texture of I_n and the texture of the previously established active appearance model falls below a predetermined threshold, a number of feature points that agree almost perfectly with every boundary of I_n can be obtained. This guarantees not only the overall matching accuracy but also the matching precision at each individual feature point, so that the human eye contour is represented more accurately and accurate information is provided for the subsequent iris region segmentation.
After the previously established active appearance model has been matched against the new human eye image to obtain a series of feature points accurately representing the human eye contour, the process returns to FIG. 1 and proceeds to step S103. In step S103, several feature points are selected from the series of feature points obtained in step S102, and the least squares method is used to fit the iris boundary, the pupil boundary, and the upper and lower eyelid boundaries separately. The fitting results are shown in FIG. 12. The selection of the feature points used to fit each boundary is described below.
● Fitting the iris boundary

Since the upper and lower eyelids occlude the upper and lower portions of the iris boundary, when fitting the iris boundary by least squares, feature points on the left and right iris boundaries that are not occluded by the eyelids are selected. For example, the 20 feature points 37 to 56 shown in FIG. 4 can be selected to fit the iris boundary; the iris boundary obtained by fitting these 20 points is shown in FIG. 12. Of course, the choice of feature points is not limited to this: a subset may also be used, for example the three points 38, 48, and 56 shown in FIG. 4, or other combinations of feature points.
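The least-squares circle fit over the selected points has a closed-form linear formulation (the Kåsa method); the sketch below applies equally to the iris boundary here and to the pupil boundary below.

```python
import numpy as np

def fit_circle(pts):
    # pts: (m, 2) boundary feature points. Rewrites the circle equation
    # (x - cx)^2 + (y - cy)^2 = r^2 as the linear system
    # 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2 and solves it.
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return (cx, cy), r
```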
● Fitting the pupil boundary

Since the pupil boundary is generally unaffected by the upper and lower eyelids, all of the feature points 57 to 68 on the pupil boundary can be used for fitting; the pupil boundary obtained from these 12 points is shown in FIG. 12. Of course, the choice of feature points is not limited to this: a subset may also be used, for example the three points 58, 63, and 66 shown in FIG. 4, or other combinations of feature points.
● Fitting the upper eyelid boundary

As can be seen from FIG. 11, among the upper eyelid feature points, those near the left and right eye corners are not suitable for fitting a parabola; therefore, only the 10 feature points 23 to 32 are used to fit the upper eyelid. The fitted upper eyelid boundary is shown in FIG. 12. Of course, the choice of feature points is not limited to this: fewer than these 10 points may also be used, for example the three points 25, 28, and 30 shown in FIG. 4, or other combinations of feature points.
● Fitting the lower eyelid boundary

As can be seen from FIG. 11, among the lower eyelid feature points, those near the left and right eye corners are not suitable for fitting a parabola; therefore, only the 10 feature points 5 to 14 are used to fit the lower eyelid. The fitted lower eyelid boundary is shown in FIG. 12. Of course, the choice of feature points is not limited to this: fewer than these 10 points may also be used, for example the three points 7, 10, and 12 shown in FIG. 4, or other combinations of feature points.
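For the eyelids, the least-squares parabola fit reduces to a degree-2 polynomial fit; a minimal sketch:

```python
import numpy as np

def fit_parabola(pts):
    # pts: (m, 2) eyelid feature points; fits y = a*x^2 + b*x + c by least squares.
    return np.polyfit(pts[:, 0], pts[:, 1], deg=2)

# coeffs = fit_parabola(upper_eyelid_pts)
# y_curve = np.polyval(coeffs, x_samples)   # evaluate the fitted eyelid curve
```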
After the iris boundary, pupil boundary, and upper and lower eyelid boundaries have been fitted as described above, a common region is obtained that lies below the upper eyelid, above the lower eyelid, outside the pupil boundary, and inside the iris boundary; this common region is the valid iris region (as shown in FIG. 12), completing the segmentation of the iris region.

According to the method of the present invention, by using the phase congruency information of the human eye images to build the texture model within the active appearance model, texture information can be obtained in which the texture features of the pupil, iris, and upper and lower eyelids are clearer, so that a more accurate active appearance model can be built than in the prior art; furthermore, by using the phase congruency information of the human eye image when matching the image against the active appearance model, the human eye contour can be represented very accurately, enabling an iris region segmentation method that is more accurate than the prior art.
Hereinafter, an apparatus for implementing the iris region segmentation method of the present invention will be described. FIG. 13 is a block diagram of an iris region segmentation device 1300 according to one embodiment of the present invention.
As shown in FIG. 13, the iris region segmentation device 1300 includes an active appearance model establishing device 1301, an active appearance model matching device 1302, and a boundary fitting device 1303.
The above-mentioned active appearance model establishing device 1301 is a device for establishing an active appearance model of human eye images, and includes: a sample image acquisition section 1301a for acquiring clear images of the left and right eyes of different people; a feature point calibration section 1301b for manually annotating feature points on each acquired clear image; a sample image phase consistency information calculation section 1301c for computing the phase consistency information of the human eye image for each clear image; a feature point alignment section 1301d for aligning the corresponding feature points across all acquired clear images; a human eye shape model establishing section 1301e for establishing the shape model constituting the active appearance model from the aligned feature points of all clear images; a human eye texture model establishing section 1301f for establishing the texture model constituting the active appearance model from the computed phase consistency information; and a synthesis section 1301g for combining the established human eye shape model and human eye texture model to obtain the human eye active appearance model.
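As an illustration of the shape-model step performed by section 1301e, the following is a minimal PCA shape model, assuming the landmark sets have already been aligned by section 1301d (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def build_shape_model(aligned_shapes, var_keep=0.95):
    """PCA shape model from aligned landmark sets.

    aligned_shapes: array (n_samples, n_points, 2) of landmarks with
    translation, scale and rotation already removed (e.g. by
    Procrustes analysis). Returns the mean shape, the retained
    principal modes, and their variances.
    """
    n, p, _ = aligned_shapes.shape
    X = aligned_shapes.reshape(n, 2 * p)       # one row per sample
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = S**2 / (n - 1)                       # per-mode variance
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
    return mean.reshape(p, 2), Vt[:k], var[:k]

# A new shape instance is mean + sum_i b_i * mode_i, with each
# parameter b_i typically clamped to +/- 3 * sqrt(var_i).
```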
Further, the above-mentioned human eye texture model establishing section 1301f includes: a shape division unit (not shown) for performing Delaunay triangulation separately on the mean shape and on each of the sample shapes obtained by annotating all clear images with the computed phase consistency information; a texture normalization unit (not shown) for mapping the phase consistency information of all acquired clear images onto the mean shape of these images by piecewise affine transformation; and a principal component analysis processing unit (not shown) for processing all normalized sample texture information by principal component analysis to obtain the texture parameters and the texture model.
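The piecewise affine warp performed by the texture normalization unit can be sketched with scikit-image, whose PiecewiseAffineTransform triangulates the landmark sets internally with a Delaunay scheme, mirroring the shape division unit described above (names here are illustrative):

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def normalise_texture(pc_image, sample_shape, mean_shape, out_shape):
    """Warp one sample's phase consistency map onto the mean shape.

    pc_image:     2D phase consistency map of a sample image.
    sample_shape: (n_points, 2) landmarks of the sample, (x, y) order.
    mean_shape:   (n_points, 2) landmarks of the mean shape.
    """
    tform = PiecewiseAffineTransform()
    # warp() maps output coordinates to input coordinates, so the
    # transform is estimated from the mean shape to the sample shape.
    tform.estimate(mean_shape, sample_shape)
    # Pixels outside the landmark mesh fall back to the fill value 0.
    return warp(pc_image, tform, output_shape=out_shape)
```

Stacking the normalised texture vectors of all samples and applying the same PCA routine as for the shape model then yields the texture parameters and the texture model.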
As can be seen from FIG. 13, the active appearance model establishing device 1301 receives as input human eye sample images of the left and right eyes of different people, generates the active appearance model of the human eye through the processing of its sections 1301a to 1301g, and outputs it to the active appearance model matching device 1302.
The above-mentioned active appearance model matching device 1302 is a device for matching a new human eye image to be subjected to iris region segmentation against the active appearance model of the human eye output from the active appearance model establishing device 1301, and includes: an input image phase consistency information calculation section 1302a for computing phase consistency information for the new human eye image to be subjected to iris region segmentation; an input image texture calculation section 1302b for computing the texture of that image from the computed phase consistency information; and an appearance texture matching section 1302c for matching the computed texture of the new human eye image against the texture of the active appearance model output from the active appearance model establishing device 1301.
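In a classic AAM, the matching performed by the appearance texture matching section 1302c is an iterative search that drives the residual between the image texture and the model texture toward zero. The sketch below assumes a regression matrix R learned offline, in the style of Cootes et al.; sample_texture, model, and p0 are stand-ins for the trained components and initial parameters, none of which are specified at this level of detail here:

```python
import numpy as np

def aam_fit(sample_texture, model, p0, n_iters=30):
    """Iterative AAM search driven by texture residuals.

    sample_texture(p): assumed callable that warps the input image's
        phase consistency map to the mean shape implied by parameters
        p and returns the normalised texture vector.
    model: assumed object with model.texture(p), the synthesised model
        texture, and model.R, a regression matrix learned offline that
        maps texture residuals to parameter updates.
    """
    p = np.asarray(p0, dtype=float).copy()
    best_p, best_err = p.copy(), np.inf
    for _ in range(n_iters):
        residual = sample_texture(p) - model.texture(p)
        err = float(residual @ residual)
        if err < best_err:
            best_p, best_err = p.copy(), err
        dp = model.R @ residual
        # Damped line search: keep the first step that lowers the error.
        for k in (1.0, 0.5, 0.25):
            candidate = p - k * dp
            r = sample_texture(candidate) - model.texture(candidate)
            if float(r @ r) < err:
                p = candidate
                break
        else:
            break                       # no step improved; converged
    return best_p, best_err
```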
As can be seen from FIG. 13, the active appearance model matching device 1302 receives as input the image whose iris region is to be segmented together with the active appearance model, obtains through the processing of its sections 1302a to 1302c a series of feature points delineating the contour in the input image, and outputs them to the boundary fitting device 1303.
The above-mentioned boundary fitting device 1303 is a device for selecting several suitable feature points from the above series of feature points and fitting the iris boundary, the pupil boundary, and the upper and lower eyelid boundaries respectively by the least-squares method; it includes: an iris boundary fitting section 1303a for fitting the iris boundary of the input image; a pupil boundary fitting section 1303b for fitting the pupil boundary of the input image; an upper eyelid boundary fitting section 1303c for fitting the upper eyelid boundary of the input image; and a lower eyelid boundary fitting section 1303d for fitting the lower eyelid boundary of the input image.
As can be seen from FIG. 13, the boundary fitting device 1303 receives the series of feature points obtained by the matching and, through the processing of its sections 1303a to 1303d, completes the segmentation of the iris region of the input image and obtains the effective iris region.
It should be noted that the iris region segmentation device 1300 shown in FIG. 13 is merely an example; various modifications can be made to it that achieve the same effect. As one such modification, FIG. 14 shows a block diagram of an iris region segmentation device 1400 according to an alternative embodiment of the present invention.
The iris region segmentation device 1400 differs from the iris region segmentation device 1300 shown in FIG. 13 only in that an active appearance model establishing device 1401 is used in place of the active appearance model establishing device 1301, and, further, in that this active appearance model establishing device 1401 includes a human eye texture model establishing section 1401f in place of the human eye texture model establishing section 1301f shown in FIG. 13.
Specifically, the above-mentioned human eye texture model establishing section 1401f includes: a texture normalization unit (not shown) for mapping the phase consistency information of all acquired clear images onto the mean shape of these images using a correspondence-point-based image registration algorithm; and a principal component analysis processing unit (not shown) for processing all normalized sample texture information by principal component analysis to obtain the texture parameters and the texture model.
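The text does not name a specific correspondence-point-based registration algorithm; a thin-plate spline interpolant is one common choice and is used here purely as an assumed substitute. The sketch maps mean-shape coordinates back into a sample image so that its phase consistency map can be resampled onto the mean shape:

```python
import numpy as np

def tps_map(src_pts, dst_pts, coords):
    """Thin-plate-spline interpolation: for each point in coords
    (given in the destination / mean-shape frame), return the mapped
    location in the source image frame. The mapping is exact at the
    landmark correspondences dst_pts[i] -> src_pts[i].
    """
    src_pts = np.asarray(src_pts, dtype=float)
    dst_pts = np.asarray(dst_pts, dtype=float)
    coords = np.asarray(coords, dtype=float)

    def U(r2):                      # TPS kernel r^2 * log(r^2)
        out = np.zeros_like(r2)
        nz = r2 > 0
        out[nz] = r2[nz] * np.log(r2[nz])
        return out

    n = len(dst_pts)
    d2 = ((dst_pts[:, None, :] - dst_pts[None, :, :]) ** 2).sum(-1)
    P = np.hstack([np.ones((n, 1)), dst_pts])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = U(d2)
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = src_pts
    params = np.linalg.solve(A, b)  # bending weights + affine part
    w, a = params[:n], params[n:]
    d2c = ((coords[:, None, :] - dst_pts[None, :, :]) ** 2).sum(-1)
    Pc = np.hstack([np.ones((len(coords), 1)), coords])
    return U(d2c) @ w + Pc @ a
```

Evaluating tps_map over a pixel grid of the mean shape and bilinearly sampling the phase consistency map at the returned coordinates produces the normalized texture.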
Apart from the differences described above, the iris region segmentation device 1400 shown in FIG. 14 is identical to the iris region segmentation device 1300 shown in FIG. 13.
According to the device of the present invention, by including in the active appearance model establishing device a sample image phase consistency information calculation section that computes the phase consistency information of the human eye image for each clear image, and a human eye texture model establishing section that builds the texture model of the active appearance model from the computed phase consistency information, texture information in which the texture features of the pupil, the iris, and the upper and lower eyelids are much clearer can be obtained, so that a more accurate active appearance model can be established than in the prior art. Furthermore, by including in the active appearance model matching device an input image phase consistency information calculation section that computes phase consistency information for the human eye image to be subjected to iris region segmentation, and an appearance texture matching section that matches the texture of the human eye image obtained from the computed phase consistency information against the texture of the active appearance model, the human eye contour can be delineated very accurately, thereby achieving an iris region segmentation device that is more accurate than the prior art.
It should be noted that the present invention may be implemented in software and/or in a combination of software and hardware; for example, the individual devices of the present invention may be implemented using an application-specific integrated circuit (ASIC) or any other similar hardware device. In one embodiment, the software program of the present invention may be executed by a processor to implement the steps or functions described above. Likewise, the software programs of the present invention (including related data structures) may be stored in a computer-readable recording medium, for example, a RAM memory, a magnetic or optical drive, a floppy disk, or similar devices. In addition, some steps or functions of the present invention may be implemented in hardware, for example, as a circuit that cooperates with a processor to perform the individual steps or functions.
It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, and that the present invention can be embodied in other specific forms without departing from the spirit or essential characteristics of the invention. The embodiments are therefore to be regarded in all respects as illustrative and not restrictive, the scope of the invention being defined by the appended claims rather than by the foregoing description; all changes that fall within the meaning and range of equivalency of the claims are therefore intended to be embraced within the invention. Any reference signs in the claims shall not be construed as limiting the claims concerned. Moreover, the word "comprising" obviously does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of devices, sections, or units recited in the device claims may also be implemented by a single device, section, or unit through software or hardware. The words "first", "second", and so on are used to denote names and do not denote any particular order.

Claims (24)

  1. An iris region segmentation method based on an active appearance model, characterized in that the method comprises:
    an active appearance model establishing step of establishing, from a plurality of pre-acquired human eye sample images, an active appearance model composed of a human eye shape model and a human eye texture model;
    an active appearance model matching step of matching an input human eye image to be subjected to iris region segmentation against the active appearance model established in the active appearance model establishing step, to obtain a plurality of feature points delineating the human eye contour in the input human eye image; and
    a boundary fitting step of selecting, from the plurality of feature points obtained in the active appearance model matching step, feature points for fitting each boundary in the input human eye image and performing the fitting, to obtain the segmented iris region,
    wherein phase consistency information is used in both the active appearance model establishing step and the active appearance model matching step.
  2. The iris region segmentation method according to claim 1, characterized in that the active appearance model establishing step comprises:
    a sample image phase consistency information calculating step of calculating phase consistency information for each of the plurality of pre-acquired human eye sample images; and
    a human eye texture model establishing step of using the calculated phase consistency information to establish the human eye texture model constituting the active appearance model.
  3. The iris region segmentation method according to claim 2, characterized in that the human eye texture model establishing step comprises:
    a shape division step of performing Delaunay triangulation separately on the mean shape of the plurality of pre-acquired human eye sample images and on each of the plurality of sample shapes obtained by marking the plurality of human eye sample images using the calculated phase consistency information;
    a texture normalization step of mapping the calculated phase consistency information onto the mean shape by piecewise affine transformation, to obtain normalized sample texture information; and
    a principal component analysis processing step of processing the sample texture information by principal component analysis, to obtain texture parameters and the texture model.
  4. The iris region segmentation method according to claim 2, characterized in that the human eye texture model establishing step comprises:
    a texture normalization step of mapping the calculated phase consistency information onto the mean shape using a correspondence-point-based image registration algorithm, to obtain normalized sample texture information; and
    a principal component analysis processing step of processing the sample texture information by principal component analysis, to obtain texture parameters and the texture model.
  5. The iris region segmentation method according to claim 1, characterized in that the active appearance model matching step comprises:
    an input image phase consistency information calculating step of calculating phase consistency information for the input human eye image to be subjected to iris region segmentation;
    an input image texture calculating step of calculating the texture of the input human eye image using the calculated phase consistency information of the input human eye image; and
    an appearance texture matching step of matching the calculated texture of the input human eye image against the texture of the active appearance model established in the active appearance model establishing step, to obtain the plurality of feature points delineating the human eye contour in the input human eye image.
  6. The iris region segmentation method according to claim 1, characterized in that the active appearance model establishing step further comprises:
    a sample image acquisition step of pre-acquiring the plurality of human eye sample images of the left and right eyes of different people;
    a feature point calibration step of manually annotating feature points on the plurality of human eye sample images;
    a feature point alignment step of aligning the corresponding feature points across the plurality of human eye sample images;
    a human eye shape model establishing step of establishing the human eye shape model constituting the active appearance model from the feature points of the plurality of human eye sample images aligned in the feature point alignment step; and
    a synthesis step of combining the established human eye shape model and the human eye texture model to obtain the active appearance model.
  7. The iris region segmentation method according to claim 6, characterized in that, in the feature point alignment step, Procrustes analysis is used to obtain aligned images from which translation, scale, and rotation have been removed.
  8. The iris region segmentation method according to claim 6, characterized in that, in the human eye shape model establishing step and the human eye texture model establishing step, principal component analysis is used to obtain the human eye shape model and the human eye texture model.
  9. The iris region segmentation method according to claim 1, characterized in that the boundaries include the iris boundary, the pupil boundary, and the upper and lower eyelid boundaries.
  10. The iris region segmentation method according to claim 9, characterized in that, when fitting the iris boundary in the boundary fitting step, at least a portion of the feature points located on the left and right iris boundaries is selected from the plurality of feature points obtained in the active appearance model matching step for the fitting.
  11. The iris region segmentation method according to claim 9, characterized in that, when fitting the pupil boundary in the boundary fitting step, at least a portion of the feature points located on the pupil boundary is selected from the plurality of feature points obtained in the active appearance model matching step for the fitting.
  12. The iris region segmentation method according to claim 9, characterized in that, when fitting the upper and lower eyelid boundaries in the boundary fitting step, at least a portion of the feature points located in the middle section of the upper and lower eyelid boundaries, spaced a certain distance from the eye corners, is selected from the plurality of feature points obtained in the active appearance model matching step for the fitting.
  13. An iris region segmentation device based on an active appearance model, characterized in that the device comprises:
    an active appearance model establishing device configured to establish, from a plurality of pre-acquired human eye sample images, an active appearance model composed of a human eye shape model and a human eye texture model;
    an active appearance model matching device configured to match an input human eye image to be subjected to iris region segmentation against the active appearance model established in the active appearance model establishing device, to obtain a plurality of feature points delineating the human eye contour in the input human eye image; and
    a boundary fitting device configured to select, from the plurality of feature points obtained in the active appearance model matching device, feature points for fitting each boundary in the input human eye image and to perform the fitting, to obtain the segmented iris region,
    wherein phase consistency information is used in both the active appearance model establishing device and the active appearance model matching device.
  14. The iris region segmentation device according to claim 13, characterized in that the active appearance model establishing device comprises:
    a sample image phase consistency information calculation section configured to calculate phase consistency information for each of the plurality of pre-acquired human eye sample images; and
    a human eye texture model establishing section configured to use the calculated phase consistency information to establish the human eye texture model constituting the active appearance model.
  15. The iris region segmentation device according to claim 14, characterized in that the human eye texture model establishing section comprises:
    a shape division unit configured to perform Delaunay triangulation separately on the mean shape of the plurality of pre-acquired human eye sample images and on each of the plurality of sample shapes obtained by marking the plurality of human eye sample images using the calculated phase consistency information;
    a texture normalization unit configured to map the calculated phase consistency information onto the mean shape by piecewise affine transformation, to obtain normalized sample texture information; and
    a principal component analysis processing unit configured to process the sample texture information by principal component analysis, to obtain texture parameters and the texture model.
  16. The iris region segmentation device according to claim 14, characterized in that the human eye texture model establishing section comprises:
    a texture normalization unit configured to map the calculated phase consistency information onto the mean shape using a correspondence-point-based image registration algorithm, to obtain normalized sample texture information; and
    a principal component analysis processing unit configured to process the sample texture information by principal component analysis, to obtain texture parameters and the texture model.
  17. The iris region segmentation device according to claim 13, characterized in that the active appearance model matching device comprises:
    an input image phase consistency information calculation section configured to calculate phase consistency information for the input human eye image to be subjected to iris region segmentation;
    an input image texture calculation section configured to calculate the texture of the input human eye image using the calculated phase consistency information of the input human eye image; and
    an appearance texture matching section configured to match the calculated texture of the input human eye image against the texture of the active appearance model established in the active appearance model establishing device, to obtain the plurality of feature points delineating the human eye contour in the input human eye image.
  18. The iris region segmentation device according to claim 13, characterized in that the active appearance model establishing device further comprises:
    a sample image acquisition section configured to pre-acquire the plurality of human eye sample images of the left and right eyes of different people;
    a feature point calibration section configured to manually annotate feature points on the plurality of human eye sample images;
    a feature point alignment section configured to align the corresponding feature points across the plurality of human eye sample images;
    a human eye shape model establishing section configured to establish the human eye shape model constituting the active appearance model from the feature points of the plurality of human eye sample images aligned in the feature point alignment section; and
    a synthesis section configured to combine the established human eye shape model and the human eye texture model to obtain the active appearance model.
  19. The iris region segmentation device according to claim 18, characterized in that the feature point alignment section is configured to use Procrustes analysis to obtain aligned images from which translation, scale, and rotation have been removed.
  20. The iris region segmentation device according to claim 18, characterized in that the human eye shape model establishing section and the human eye texture model establishing section are configured to use principal component analysis to obtain the human eye shape model and the human eye texture model.
  21. The iris region segmentation device according to claim 13, characterized in that the boundaries include the iris boundary, the pupil boundary, and the upper and lower eyelid boundaries.
  22. The iris region segmentation device according to claim 21, characterized in that, when fitting the iris boundary in the boundary fitting device, at least a portion of the feature points located on the left and right iris boundaries is selected from the plurality of feature points obtained by the active appearance model matching device for the fitting.
  23. The iris region segmentation device according to claim 21, characterized in that, when fitting the pupil boundary in the boundary fitting device, at least a portion of the feature points located on the pupil boundary is selected from the plurality of feature points obtained by the active appearance model matching device for the fitting.
  24. The iris region segmentation device according to claim 21, characterized in that, when fitting the upper and lower eyelid boundaries in the boundary fitting device, at least a portion of the feature points located in the middle section of the upper and lower eyelid boundaries, spaced a certain distance from the eye corners, is selected from the plurality of feature points obtained by the active appearance model matching device for the fitting.
PCT/CN2015/000940 2015-12-30 2015-12-30 Iris region segmentation method and device based on active appearance model WO2017113039A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2015/000940 WO2017113039A1 (en) 2015-12-30 2015-12-30 Iris region segmentation method and device based on active appearance model
CN201580085642.9A CN109074471B (en) 2015-12-30 2015-12-30 Iris region segmentation method and device based on active appearance model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/000940 WO2017113039A1 (en) 2015-12-30 2015-12-30 Iris region segmentation method and device based on active appearance model

Publications (1)

Publication Number Publication Date
WO2017113039A1 true WO2017113039A1 (en) 2017-07-06

Family

ID=59224079

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/000940 WO2017113039A1 (en) 2015-12-30 2015-12-30 Iris region segmentation method and device based on active appearance model

Country Status (2)

Country Link
CN (1) CN109074471B (en)
WO (1) WO2017113039A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859219A (en) * 2019-02-26 2019-06-07 江西理工大学 In conjunction with the high score Remote Sensing Image Segmentation of phase and spectrum
CN112906431A (en) * 2019-11-19 2021-06-04 北京眼神智能科技有限公司 Iris image segmentation method and device, electronic equipment and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560539A (en) * 2019-09-10 2021-03-26 中国电子技术标准化研究院 Resolution testing method, device and system for iris acquisition equipment
CN112651389B (en) * 2021-01-20 2023-11-14 北京中科虹霸科技有限公司 Correction model training, correction and recognition method and device for non-emmetropic iris image

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1794263A (en) * 2005-12-29 2006-06-28 上海交通大学 Partition discriminating method of human iris vein
CN101539991A (en) * 2008-03-20 2009-09-23 中国科学院自动化研究所 Effective image-region detection and segmentation method for iris recognition
CN104463159A (en) * 2014-12-31 2015-03-25 北京释码大华科技有限公司 Image processing method and device of iris positioning
CN104680128A (en) * 2014-12-31 2015-06-03 北京释码大华科技有限公司 Four-dimensional analysis-based biological feature recognition method and four-dimensional analysis-based biological feature recognition system
US9189686B2 (en) * 2013-12-23 2015-11-17 King Fahd University Of Petroleum And Minerals Apparatus and method for iris image analysis
CN105069428A (en) * 2015-07-29 2015-11-18 天津市协力自动化工程有限公司 Multi-template iris identification method based on similarity principle and multi-template iris identification device based on similarity principle
CN105160306A (en) * 2015-08-11 2015-12-16 北京天诚盛业科技有限公司 Iris image blurring determination method and device
CN105184269A (en) * 2015-09-15 2015-12-23 成都通甲优博科技有限责任公司 Extraction method and extraction system of iris image

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1092372C (en) * 1997-05-30 2002-10-09 王介生 Iris recoganizing method
US7756301B2 (en) * 2005-01-26 2010-07-13 Honeywell International Inc. Iris recognition system and method
US8682073B2 (en) * 2011-04-28 2014-03-25 Sri International Method of pupil segmentation
CN102306289A (en) * 2011-09-16 2012-01-04 兰州大学 Method for extracting iris features based on pulse couple neural network (PCNN)
CN107895157B (en) * 2017-12-01 2020-10-27 沈海斌 Method for accurately positioning iris center of low-resolution image


Also Published As

Publication number Publication date
CN109074471A (en) 2018-12-21
CN109074471B (en) 2022-07-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15911671

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15911671

Country of ref document: EP

Kind code of ref document: A1