CN112258569A - Pupil center positioning method, device, equipment and computer storage medium


Info

Publication number
CN112258569A
CN112258569A (application No. CN202010993486.9A)
Authority
CN
China
Prior art keywords
pupil
eye image
target eye
image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010993486.9A
Other languages
Chinese (zh)
Other versions
CN112258569B (en)
Inventor
季渊
赵浩然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Tanggu Semiconductor Co ltd
Original Assignee
Suzhou Tanggu Photoelectric Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Tanggu Photoelectric Technology Co Ltd
Priority to CN202010993486.9A
Publication of CN112258569A
Application granted
Publication of CN112258569B
Status: Active


Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/60 Analysis of geometric attributes
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G06T5/70 Denoising; Smoothing
    • G06T7/11 Region-based segmentation
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/90 Determination of colour characteristics
    • G06T2207/30041 Eye; Retina; Ophthalmic (indexing scheme for biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides a pupil center positioning method, device, equipment and computer storage medium, wherein the method comprises the following steps: acquiring a target eye image; determining a contour of the pupil in the target eye image; determining a circumscribed figure of the contour, and acquiring the coordinates of the tangent points of the contour and the circumscribed figure in the target eye image; and determining the position of the center point of the pupil in the target eye image according to the coordinates of the tangent points. The method, device, equipment and computer storage medium require only a small amount of calculation and can achieve rapid pupil center positioning.

Description

Pupil center positioning method, device, equipment and computer storage medium
Technical Field
The present application belongs to the field of image positioning technology, and in particular, to a pupil center positioning method, apparatus, device, and computer storage medium.
Background
With the development of technology, pupil center positioning plays an increasingly prominent role in many fields. For example, in the field of eye tracking, the direction of the line of sight and its landing point can be estimated by capturing the position of the pupil center. In the field of iris recognition, locating the pupil center makes it convenient to extract the iris region and then recognize features such as texture on the extracted region.
In order to realize pupil center positioning, the existing pupil center positioning method usually needs to utilize a mathematical fitting equation and/or a large amount of mathematical operations to calculate the position of the pupil center point, which is not only huge in calculation amount, but also slow in positioning speed.
Disclosure of Invention
The embodiment of the application provides a method, a device and equipment for positioning pupil center and a computer storage medium, which can solve the technical problems of huge calculation amount and slow positioning speed in the pupil center positioning process.
In a first aspect, an embodiment of the present application provides a pupil center positioning method, including:
acquiring a target eye image;
determining a contour of a pupil in the target eye image;
determining a circumscribed figure of the outline, and acquiring coordinates of a tangent point of the outline and the circumscribed figure in the target eye image;
and determining the position of the central point of the pupil in the target eye image according to the coordinates of the tangent point.
In one embodiment, the determining the contour of the pupil in the target eye image specifically includes:
carrying out image binarization processing on the target eye image according to a preset threshold value to obtain a binarized image comprising a pupil;
screening pixel points in the binary image to obtain target pixel points on the edge of the pupil;
and determining the outline according to the target pixel point.
In an embodiment, before the image binarization processing is performed on the target eye image according to a preset threshold value to obtain a binarized image including a pupil, the method further includes:
acquiring the occurrence frequency of each level of gray value in a preset gray value range in the target eye image;
constructing a gray level histogram of the corresponding relation between each level of gray level value and the occurrence frequency;
and taking the gray value corresponding to the minimum value between the first maximum value and the second maximum value of the occurrence frequency in the gray histogram as the preset threshold value.
In an embodiment, the screening the pixel points in the binarized image to obtain the target pixel points on the edge of the pupil specifically includes:
performing plane convolution operation on each pixel point in the binary image by using a transverse convolution factor of the Sobel convolution factor and a longitudinal convolution factor of the Sobel convolution factor to obtain a gradient amplitude value of each pixel point in the binary image;
carrying out non-maximum suppression processing on the gradient amplitude;
extracting first pixel points which meet preset conditions in the binarized image as the target pixel points, wherein the preset conditions comprise:
the gradient amplitude of the first pixel point is larger than a preset first threshold value;
or, under the condition that the gradient amplitude of the first pixel point is smaller than or equal to the preset first threshold and larger than a preset second threshold, a pixel point whose gradient amplitude is larger than the preset first threshold exists in the eight-neighborhood of the first pixel point.
In one embodiment, prior to said determining the contour of the pupil in the target eye image, the method further comprises:
preprocessing the target eye image, the preprocessing comprising: gaussian filtering processing, opening operation and closing operation;
determining the contour of the pupil in the target eye image specifically includes:
determining the contour of the pupil in the preprocessed target eye image.
In an embodiment, in a case that the circumscribed figure is a circumscribed rectangle, the obtaining of the coordinates of the tangent point of the outline and the circumscribed rectangle in the target eye image specifically includes:
acquiring a first coordinate of each target pixel point in the target eye image;
and respectively taking the first coordinate with the minimum abscissa, the first coordinate with the maximum abscissa, the first coordinate with the minimum ordinate and the first coordinate with the maximum ordinate in the first coordinates as the coordinates of the tangent points.
In one embodiment, after determining the location of the center point of the pupil in the target eye image, the method further comprises:
and marking the position of the central point in the target eye image by using a preset identification.
In a second aspect, an embodiment of the present application provides a pupil center positioning device, which includes:
an acquisition unit configured to acquire a target eye image;
a first determination unit configured to determine a contour of a pupil in the target eye image;
the second determining unit is used for determining a circumscribed figure of the outline and acquiring coordinates of a tangent point of the outline and the circumscribed figure in the target eye image;
and the third determining unit is used for determining the position of the central point of the pupil in the target eye image according to the coordinates of the tangent point.
In a third aspect, an embodiment of the present application provides an electronic device, where the device includes:
a processor, a memory, and a computer program stored on the memory and executable on the processor; when executed by the processor, the computer program implements the steps of the pupil center positioning method described above.
In a fourth aspect, the present application provides a computer storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the steps of the pupil center positioning method as described above.
With the pupil center positioning method, device, equipment and computer storage medium of the embodiments of the application, a target eye image is obtained first; then the contour of the pupil in the target eye image is determined, along with a circumscribed figure of the contour; finally, the position of the center point of the pupil in the target eye image is determined from the coordinates of the tangent points of the contour and the circumscribed figure. Because the center point is determined from the tangent-point coordinates of the pupil contour and its circumscribed figure, no mathematical fitting equation is needed in the positioning process, the coordinate calculation is simple, no large amount of mathematical operation is involved, the calculation amount is small, the time to compute the center point is short, and rapid pupil center positioning can be realized.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below; those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of the data model of an arbitrary ellipse and its circumscribed figure;
fig. 2 is a schematic flowchart of a pupil center positioning method according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of step S102 of the pupil center positioning method according to the embodiment of the present application;
fig. 4a schematically shows a target eye image of an embodiment of the application, and fig. 4b schematically shows a grayscale histogram of an embodiment of the application;
fig. 5 is a schematic diagram of the pupil contour extracted in step S102 according to the embodiment of the present disclosure;
fig. 6a is an original target eye image, fig. 6b is a target eye image after gaussian filtering, fig. 6c is a target eye image after opening operation, and fig. 6d is a target eye image after closing operation;
figure 7 schematically illustrates a circumscribed figure of a pupil of an embodiment of the present application;
figure 8 schematically shows the results of pupil centre positioning of an embodiment of the present application;
fig. 9 shows a partial target eye image for pupil center positioning by using the pupil center positioning method according to the embodiment of the present application;
fig. 10 is a schematic structural diagram of a pupil center positioning device according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application will be described in detail below, and in order to make objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative only and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by illustrating examples thereof.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Pupil center positioning is widely used in various fields. For example, in the field of eye tracking, the direction and landing position of the line of sight of a human eye can be estimated by capturing the position of the center of the pupil. For example, in the field of iris recognition, the iris region can be conveniently extracted by locating the center of the pupil, so that the features such as textures and the like on the extracted iris region can be conveniently recognized. For example, in the field of psychology, by detecting measurement indexes such as pupil states and eye movement tracks, lie detection can be performed on a tester, and psychological activities of the tester can be acquired. With the rapid development of eyeball tracking, pupil identification and other technologies, pupil center positioning as a basis thereof gradually becomes a research hotspot.
Two methods have been proposed to realize pupil center positioning: one is based on the Hough transform, the other on least-squares ellipse fitting. The Hough-transform-based method must solve for all possible circle centers and radii for every edge pixel point when locating the pupil center; this involves a large amount of mathematical operations, consumes considerable time and space, and therefore suffers from a huge calculation amount and slow positioning. It also loses accuracy when the pupil is not a perfect circle. The least-squares ellipse-fitting method requires a mathematical fitting equation to calculate the position of the pupil center point. In short, the prior art usually computes the position of the pupil center point with a mathematical fitting equation and/or a large amount of mathematical operations, which is both computationally expensive and slow.
To solve these problems in the prior art, after extensive research the inventors arrived at the following technical idea: the extracted pupil-contour scatter diagram is approximately an ellipse or a circle, and because an ellipse and its circumscribed figure share the same center point, the position of the pupil center point can be obtained indirectly without a fitting equation or a large amount of mathematical operation.
To facilitate understanding and verifying the above technical idea, the following description is made in conjunction with fig. 1.
FIG. 1 is a diagram of the data model of an arbitrary ellipse and its circumscribed figure. In fig. 1, a denotes the semi-major axis of the ellipse, b denotes the semi-minor axis, and the four points P1, P2, P3 and P4 denote the tangent points of the ellipse on the four sides of its circumscribed rectangle. As shown in FIG. 1, taking the circumscribed figure as a circumscribed rectangle as an example, the center point of the ellipse lies at the origin O; the tangent points P1(x1, y1) and P3(-x1, -y1) are symmetric about the center point O, as are the tangent points P2(x2, y2) and P4(-x2, -y2). By this symmetry, the circumscribed rectangle ABCD of the ellipse is also centrally symmetric about the origin O, and its diagonals AC and BD intersect at the origin O; that is, the center point of the circumscribed rectangle ABCD is also the origin O. From this analysis, an ellipse and its circumscribed figure have the same center point.
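As a brief analytical check (an addition for illustration, not part of the original demonstration), the same symmetry argument can be written for an arbitrary rotated ellipse with center $(x_c, y_c)$, semi-axes $a, b$ and rotation angle $\varphi$:

$$x(t) = x_c + a\cos\varphi\cos t - b\sin\varphi\sin t, \qquad y(t) = y_c + a\sin\varphi\cos t + b\cos\varphi\sin t$$

Each coordinate has the form $c + A\cos t + B\sin t$, so every antipodal parameter pair $t$ and $t + \pi$ satisfies $x(t) + x(t+\pi) = 2x_c$ and $y(t) + y(t+\pi) = 2y_c$. The horizontal and vertical extremes, i.e., the four tangent points of the axis-aligned circumscribed rectangle, occur in exactly such antipodal pairs, so the mean of the four tangent-point coordinates is precisely the center $(x_c, y_c)$.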
Based on the technical idea, embodiments of the present application provide a pupil center positioning method, apparatus, device, and computer storage medium.
The technical idea of the embodiments of the application is as follows: first, acquire a target eye image; then determine the contour of the pupil in the target eye image and a circumscribed figure of the contour; finally, determine the position of the pupil center point in the target eye image from the coordinates of the tangent points of the contour and the circumscribed figure. Because the center point is determined from the tangent-point coordinates of the pupil contour and its circumscribed figure, no mathematical fitting equation is needed, the coordinate calculation is simple, no large amount of mathematical operation is involved, and the center point can be located quickly.
First, a pupil center positioning method provided in the embodiment of the present application is described below.
Fig. 2 is a schematic flow chart illustrating a pupil center positioning method according to an embodiment of the present disclosure. As shown in fig. 2, the method may include the steps of:
and S101, acquiring a target eye image.
S102, determining the contour of the pupil in the target eye image.
S103, determining a circumscribed figure of the outline, and acquiring coordinates of a tangent point of the outline and the circumscribed figure in the target eye image.
And S104, determining the position of the central point of the pupil in the target eye image according to the coordinates of the tangent point.
Specific implementations of the above steps are described below.
First, S101 is described: acquiring a target eye image. Specifically, the target eye image may be captured by a video camera, a still camera, or any device with a photographing function. Alternatively, one or more eye images may be retrieved from stored existing eye images as the target eye image; the present application is not limited in this respect.
To save pupil center positioning time and achieve rapid positioning, as an example, the embodiment of the present application acquires the target eye image in a near-eye manner. Unlike the desktop manner, which captures the whole face, the near-eye manner aims the camera directly at the eye region, so the target eye image contains only the eye region. Compared with a desktop-style eye image, this saves the time the desktop manner spends extracting the eye region, thereby shortening the time for positioning the pupil center and enabling rapid positioning; the acquired target eye image also shows clearer detail, which makes analysis of eye features such as the pupil and the iris more convenient and efficient.
The above is a specific implementation of S101, and a specific implementation of S102 is described below.
S102, determining the contour of the pupil in the target eye image.
As an example, S102 may directly process the target eye image acquired in S101 to obtain the contour of the pupil.
Fig. 3 is a flowchart illustrating step S102 of the pupil center positioning method according to the embodiment of the present application. As shown in fig. 3, S102 may specifically include the following steps:
s201, performing image binarization processing on the target eye image according to a preset threshold value to obtain a binarized image comprising pupils;
s202, screening pixel points in the binary image to obtain target pixel points on the edge of the pupil;
and S203, determining the contour of the pupil according to the target pixel point.
Steps S201 to S203 are described in order.
Fig. 4a schematically shows a target eye image of an embodiment of the application. As shown in fig. 4a, the structures in the target eye image are, from inside to outside, the pupil, the iris and the sclera, and their gray values decrease in the order sclera, iris, pupil. Given this gray distribution, in S201 the pupil region, which has the lowest gray values, can be segmented by setting a reasonable threshold, yielding a binarized image that includes the pupil.
Specifically, in S201, the image binarization processing is performed on the target eye image according to a preset threshold, and specifically includes: and setting the gray value of the pixel point with the current gray value larger than the preset threshold value in the target eye image as 0 or 255, and setting the gray value of the pixel point with the current gray value smaller than or equal to the preset threshold value as 255 or 0 to obtain the binary image containing the pupil. Through the image binarization processing, the target eye image is converted into a binarized image having only black and white colors, for example, the pupil is black, and the region of the target eye image other than the pupil is white.
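As a minimal illustrative sketch of this step (the file name, the threshold value and the use of OpenCV are assumptions of convenience, not part of the patent), the binarization might look like:

```python
import cv2

# Load the target eye image as 8-bit grayscale; "eye.png" is a placeholder.
eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)

T = 76  # placeholder preset threshold; see the histogram-based selection below

# THRESH_BINARY_INV maps pixels with gray value <= T (the dark pupil) to 255
# and all brighter pixels to 0, one of the two polarities described above.
_, binarized = cv2.threshold(eye, T, 255, cv2.THRESH_BINARY_INV)
cv2.imwrite("binarized.png", binarized)
```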
In S201, setting the preset threshold is the most important part of the step. A reasonable threshold cleanly segments the pupil region, while a threshold that is too low or too high degrades the segmentation: too low a threshold may yield an incomplete pupil region, and too high a threshold may segment a pupil image that contains interference regions.
In view of this, in order to make the pupil area in the converted binarized image more accurate and reasonable, as an implementation manner, the embodiment of the present application determines the size of the preset threshold through the following steps:
the method comprises the following steps of firstly, obtaining the occurrence frequency of each level of gray value in a preset gray value range in a target eye image. The preset gray scale value range may be, for example, 0 to 255, and may also be other reasonable ranges, which is not limited in this application. In the first step, the number of the pixel points corresponding to each level of gray scale value in the target eye image is specifically determined, so that the occurrence frequency of each level of gray scale value in the target eye image is determined. For example, there are 1000 pixels in the target eye image, 30 pixels in the level 1 gray scale value, and 40 pixels in the level 2 gray scale value, so the number of occurrences of the level 1 gray scale value in the target eye image is 30, and the number of occurrences of the level 2 gray scale value in the target eye image is 40.
And secondly, constructing a gray level histogram of the corresponding relation between each level of gray level value and the occurrence frequency of each level of gray level value in the target eye image.
Fig. 4b schematically shows a gray histogram of an embodiment of the application. In fig. 4b, the abscissa is the gray value from 0 to 255 and the ordinate is the number of occurrences of each gray value in the target eye image. After the number of occurrences of each gray level is determined, the histogram of the correspondence between each gray level and its occurrence count is constructed. As shown in fig. 4b, the gray histogram of the pupil and iris regions has a "two peaks, one valley" shape. This is because the gray values of the pupil, iris and sclera regions each concentrate in their own range: for example, the pupil's gray values may concentrate between 30 and 50, and the iris's between 130 and 170. A "first peak" therefore appears over the pupil's concentrated range; as the gray value increases, the count decreases until a critical value between the pupil's and the iris's gray values is reached; past the critical value the count rises again, producing the "second peak" over the iris's range.
And a third step of using the gray value corresponding to the minimum value between the first maximum value and the second maximum value of the occurrence frequency in the gray histogram as a preset threshold value.
Specifically, in the second step, a critical value is mentioned between the gray-level value of the pupil region and the gray-level value of the iris region, and this critical value is the gray-level value corresponding to the "valley" between the "first peak" and the "second peak" in the gray-level histogram. In practical applications, the "first peak" is the first maximum of the number of occurrences, the "second peak" is the second maximum of the number of occurrences, and the "valley" is the minimum between the first maximum and the second maximum of the number of occurrences. In the embodiment of the present application, this critical value is used as a preset threshold.
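A hedged sketch of this threshold selection follows; the histogram smoothing and the peak-finding rule are simplifying assumptions, not the patent's exact procedure:

```python
import numpy as np

def valley_threshold(gray_image: np.ndarray) -> int:
    """Gray value at the valley between the two dominant histogram peaks."""
    hist = np.bincount(gray_image.ravel(), minlength=256).astype(np.float64)
    # Light smoothing so spurious local maxima are not mistaken for peaks.
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")
    peaks = [i for i in range(1, 255)
             if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]]
    # The two highest peaks, in ascending gray-value order.
    p1, p2 = sorted(sorted(peaks, key=lambda i: smooth[i], reverse=True)[:2])
    # Preset threshold: the least frequent gray value between the two peaks.
    return p1 + int(np.argmin(smooth[p1:p2 + 1]))
```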
Continuing to refer to fig. 3, after the binarized image including the pupil is obtained through the preset threshold in S201, S202 is executed to screen the pixel points in the binarized image, and the target pixel points on the edge of the pupil are obtained.
Specifically, object edges in an image occur where the gray value changes most sharply, so edge extraction (contour extraction) can generally be regarded as retaining the regions of the image where the gray value changes sharply. Contour extraction is performed on the binarized image containing the pupil to obtain the target pixel points on the pupil edge. In S202, Sobel edge detection, non-maximum suppression, double-threshold detection and edge connection are performed in sequence, finally yielding the target pixel points on the pupil edge.
S202 thus specifically includes the following steps: a Sobel edge detection step, a non-maximum suppression step, and a double-threshold detection and edge connection step.
Sobel edge detection step: perform a plane convolution operation on each pixel point in the binarized image with the transverse convolution factor and the longitudinal convolution factor of the Sobel operator to obtain the gradient amplitude of each pixel point.
Specifically, the gradient amplitude G and direction θ of each pixel point in the binarized image are calculated using the Sobel convolution factors. In the embodiment of the present application, the Sobel operator comprises two 3 × 3 matrices, the transverse convolution factor Gx and the longitudinal convolution factor Gy, expressed as follows:
$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix}, \qquad G_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} \tag{1, 2}$$
wherein Gx is used to detect horizontal edges and Gy to detect vertical edges. The transverse convolution factor Gx and the longitudinal convolution factor Gy are each plane-convolved with every pixel point in the binarized image, from which the gradient amplitude G and gradient direction θ of each pixel point are calculated.
The expression for calculating the gradient amplitude G and the gradient direction theta of each pixel point in the binary image is as follows:
$$G = \sqrt{(G_x * I)^2 + (G_y * I)^2} \tag{3}$$

$$\theta = \arctan\!\left(\frac{G_y * I}{G_x * I}\right) \tag{4}$$
wherein I represents a pixel point in the binarized image and * denotes the plane convolution.
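A self-contained sketch of this gradient computation in pure NumPy (the padding mode, and the use of cross-correlation rather than flipped-kernel convolution, are implementation assumptions that do not affect the magnitude):

```python
import numpy as np

GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
GY = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=np.float64)

def sobel_gradients(img: np.ndarray):
    """Return the gradient amplitude G and direction theta for every pixel."""
    h, w = img.shape
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):          # slide the 3x3 factors over every pixel
        for dx in range(3):
            window = padded[dy:dy + h, dx:dx + w]
            gx += GX[dy, dx] * window
            gy += GY[dy, dx] * window
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```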
The inventors found that, after the gradient of each pixel point in the binarized image containing the pupil is calculated, extracting the pupil edge directly from the gradient amplitudes yields a blurred edge. To avoid this, as an example, the embodiment of the present application "thins" the edge with a non-maximum suppression step: local maxima of the gradient amplitude are kept, and all other gradient values in the binarized image are suppressed to 0, rejecting a portion of non-edge pixel points. For example, the pixel points may be divided into groups by image region, each group containing several pixel points (e.g., 10); a preset number of pixel points with the largest gradient amplitudes in each group (e.g., 3) are kept, and the gradient amplitudes of the remaining pixel points in the group are replaced with 0.
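A sketch of this simplified block-wise suppression (grouping by raveled pixel order is an assumption; the text only says the pixels are grouped by region):

```python
import numpy as np

def blockwise_nms(magnitude: np.ndarray, group: int = 10, keep: int = 3):
    """Keep the `keep` largest gradient amplitudes per group; zero the rest."""
    flat = magnitude.ravel().copy()
    for start in range(0, flat.size, group):
        block = flat[start:start + group]
        if block.size > keep:
            cutoff = np.partition(block, -keep)[-keep]  # keep-th largest value
            block[block < cutoff] = 0.0                 # in-place on the view
    return flat.reshape(magnitude.shape)
```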
After the non-maximum suppression step, the double-threshold detection and edge connection step is performed: first pixel points in the binarized image that satisfy a preset condition are extracted as the target pixel points, wherein the preset condition is either of the following:
the gradient amplitude of the first pixel point is larger than a preset first threshold value;
or, under the condition that the gradient amplitude of the first pixel point is smaller than or equal to the preset first threshold and larger than a preset second threshold, a pixel point whose gradient amplitude is larger than the preset first threshold exists in the eight-neighborhood of the first pixel point.
In this embodiment of the application, the first pixel point refers to any one or more pixel points that satisfy a preset condition in the binarized image.
Specifically, real and potential edges are determined by setting a high threshold and a low threshold. After non-maximum suppression, the pixels left in the binarized image represent the actual edges in the pupil more accurately. For each remaining pixel point, let its gradient amplitude be G0, and let the preset first threshold (high threshold) and the preset second threshold (low threshold) be G1 and G2, respectively. When G0 > G1, the pixel point is regarded as a strong edge pixel point; when G0 < G2, the pixel point is not considered an edge point and is discarded; when G2 < G0 ≤ G1, the pixel point is regarded as a weak edge pixel point. A weak edge pixel point whose 8-neighborhood contains a strong edge pixel point is kept as a real edge; if its 8-neighborhood contains no strong edge pixel point, it is suppressed, that is, eliminated. In this way, the first pixel points whose gradient amplitudes satisfy the preset condition, namely the target pixel points on the pupil edge, are obtained.
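A sketch of this double-threshold test with 8-neighborhood hysteresis (the concrete values of G1 and G2 are not fixed by the patent; the single-pass check mirrors the condition exactly as stated above):

```python
import numpy as np

def double_threshold(magnitude: np.ndarray, g1: float, g2: float) -> np.ndarray:
    """Boolean edge map: strong pixels, plus weak pixels next to a strong one."""
    strong = magnitude > g1
    weak = (magnitude > g2) & ~strong
    h, w = magnitude.shape
    edges = strong.copy()
    for y in range(h):
        for x in range(w):
            if weak[y, x]:
                y0, y1 = max(0, y - 1), min(h, y + 2)
                x0, x1 = max(0, x - 1), min(w, x + 2)
                if strong[y0:y1, x0:x1].any():  # strong pixel in 8-neighborhood
                    edges[y, x] = True
    return edges
```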
After the target pixel points on the pupil edge are obtained, S203 is executed: the contour of the pupil is determined from the target pixel points. For example, the target pixel points can be connected by a program to obtain the pupil contour.
Fig. 5 is a schematic diagram of extracting a contour of a pupil in step S102 according to the embodiment of the present application. As shown in fig. 5, the contour of the pupil composed of a plurality of target pixel points can be extracted from the binarized image including the pupil region by S102.
The above is a description that S102 may directly process the target eye image acquired in S101 to obtain the contour of the pupil in an example.
As another implementation manner of the present application, in order to avoid the influence of noise and invalid information in the target eye image on S102 and subsequent steps, before performing S102, an image preprocessing step may be further included.
Specifically, in the process of capturing an image of a target eye by a device having a shooting function such as a camera, noise and interference of invalid information may be introduced to different degrees. The noise may affect the quality of the target eye image, and the invalid information may cause difficulty in subsequent analysis and processing of the target eye image. Therefore, in order to avoid the influence of noise and invalid information in the target eye image on S102 and subsequent steps, the acquired target eye image may be preprocessed before S102 is executed. Wherein the pre-processing may comprise: gaussian filtering processing, opening operation and closing operation.
Gaussian filtering, also called Gaussian smoothing, performs a weighted average of the pixels in an image according to the weight distribution of a Gaussian function, smoothing the pixel values and giving the image a slightly "blurred" appearance, thereby reducing the influence of interference information on subsequent processing such as S102. In the embodiment of the application, a two-dimensional zero-mean discrete Gaussian function with excellent smoothing performance is selected as the smoothing filter, with the following expression:
$$G(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}} \tag{5}$$
wherein σ is the standard deviation, also called the Gaussian kernel radius; the larger σ is, the more pronounced the smoothing effect. x and y are the point coordinates (abscissa and ordinate). In the present example, the Gaussian kernel radius σ is 1.4 and the Gaussian template size is 7 × 7. Gaussian filtering is performed by sliding-window convolution with the Gaussian template: the gray value of the pixel at the window center is replaced by the weighted average gray value of the pixels in the window, all pixel points in the image are scanned in turn, and the Gaussian-smoothed image is obtained. FIG. 6a is the original target eye image; fig. 6b is the target eye image after Gaussian filtering. Comparing fig. 6a and fig. 6b shows that, after Gaussian filtering, interference information such as eyelashes and iris texture becomes blurred, reducing its influence on the subsequent steps.
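Continuing the earlier sketch, the smoothing step with the parameters quoted above can be expressed with OpenCV's built-in filter (an assumption of convenience; the patent describes the sliding-window convolution itself):

```python
import cv2

# 7x7 Gaussian template with sigma = 1.4, applied to the grayscale eye image.
smoothed = cv2.GaussianBlur(eye, ksize=(7, 7), sigmaX=1.4)
```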
After Gaussian filtering, the interference information in the image is well suppressed, but fine "spots", holes, and the like may remain. To reduce their influence, as an example, the embodiment of the present application applies dilation and erosion from image morphology to the Gaussian-filtered image. Different orderings of erosion and dilation form the opening and closing operations of image morphology: erosion followed by dilation is the opening operation, and dilation followed by erosion is the closing operation.
Specifically, let f (x, y) be the input image, and b (x, y) be the structural element in the opening operation and the closing operation, as an example, the embodiment of the present application uses a square structural element with a length of 7 × 7, and the input image f is subjected to the opening operation and the closing operation by using the structural element b, and the expression is as follows:
f·b=(f⊙b)⊕b (6)
f·b=(f⊕b)⊙b (7)
wherein, the expression (6) is open operation, and the expression (7) is closed operation.
In the embodiment of the application, the Gaussian-filtered target eye image is processed with the combination of opening and closing. First, the opening operation is applied to filter out fine objects, break narrow connections and remove burrs, so that the boundary of the pupil region becomes smoother. Fig. 6c is the target eye image after the opening operation; as shown there, the burrs are substantially filtered out and the pupil boundary is smoother. Then, on that basis, the closing operation is applied to fill the tiny holes in the pupil region and connect neighboring objects across narrow gaps. Fig. 6d is the target eye image after the closing operation. As comparison of fig. 6a and 6d makes obvious, after this series of image preprocessing the interfering information in the target eye image is largely filtered out, providing a good basis for the subsequent steps.
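A sketch of this opening-then-closing combination (OpenCV again assumed for convenience):

```python
import cv2

# 7x7 square structuring element b.
b = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
opened = cv2.morphologyEx(smoothed, cv2.MORPH_OPEN, b)       # erosion, then dilation
preprocessed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, b)  # dilation, then erosion
```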
After the target eye image is preprocessed, S102 is executed to determine the contour of the pupil in the preprocessed target eye image, and the specific process may refer to the description of S102 above, which is not described herein again.
The above is a specific implementation of S102, and a specific implementation of S103 is described below.
With continued reference to fig. 2, in S103, a circumscribed figure of the contour is determined, and coordinates of a tangent point of the contour and the circumscribed figure in the target eye image are acquired.
Specifically, after a target pixel point on the edge of the pupil and the contour of the pupil are obtained in S102, a first coordinate of each target pixel point in the target eye image is obtained in S103; and respectively taking the first coordinate with the minimum abscissa, the first coordinate with the maximum abscissa, the first coordinate with the minimum ordinate and the first coordinate with the maximum ordinate in the first coordinates as the coordinates of the tangent points.
In this embodiment of the application, the first coordinate means the coordinate of a target pixel point in the target eye image. It should be noted that converting the target eye image into the binarized image changes only the gray values of the pixel points, not their coordinates. In other words, the coordinates of the pixel points in the binarized image are the same as their coordinates in the target eye image.
Therefore, after the target pixel points on the edge of the pupil are obtained, the coordinates of each target pixel point in the binary image can be obtained, and the first coordinates of each target pixel point in the target eye image are obtained.
Figure 7 schematically shows a circumscribed figure of a pupil of an embodiment of the application. As an example, as shown in fig. 7, the circumscribed figure is a circumscribed rectangle, and the contour of the pupil and the circumscribed rectangle have four tangent points, P1 ', P2', P3 'and P4', respectively. The coordinates of the four tangent points are the first coordinate with the smallest abscissa, the first coordinate with the largest abscissa, the first coordinate with the smallest ordinate and the first coordinate with the largest ordinate, namely the coordinates of the target pixel points at the leftmost side, the rightmost side, the lowermost side and the uppermost side on the contour. Here, straight lines running through the four tangent points and parallel to the x-axis and the y-axis, respectively, define a circumscribed rectangle of the contour.
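A sketch of this tangent-point extraction, assuming the contour is available as an (N, 2) array of (x, y) pixel coordinates:

```python
import numpy as np

def tangent_points(contour: np.ndarray) -> np.ndarray:
    """Leftmost, rightmost, topmost and bottommost contour pixels: the
    tangent points of the axis-aligned circumscribed rectangle."""
    left = contour[np.argmin(contour[:, 0])]
    right = contour[np.argmax(contour[:, 0])]
    top = contour[np.argmin(contour[:, 1])]
    bottom = contour[np.argmax(contour[:, 1])]
    return np.array([left, right, top, bottom])
```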
The above is a specific implementation of S103, and a specific implementation of S104 is described below.
And S104, determining the position of the central point of the pupil in the target eye image according to the coordinates of the tangent point.
Specifically, after obtaining the coordinates of the contour of the pupil and the respective tangent points of the circumscribed figure, the coordinates or the position of the center point of the pupil in the target eye image may be obtained, for example, by calculating the mean of the coordinates of the respective tangent points.
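Continuing the sketch, the center then follows as the mean of the four tangent-point coordinates (equivalently, the center of the circumscribed rectangle):

```python
def pupil_center(points) -> tuple:
    """Mean of the tangent-point coordinates, taken as the pupil center."""
    cx, cy = points.mean(axis=0)
    return float(cx), float(cy)
```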
In order to visually display the position of the center point of the pupil, as an example, the method may further include: and marking the position of the central point of the pupil in the target eye image by using a preset identification. The preset identifier may be any symbol or graphic, and the application is not limited thereto.
Fig. 8 schematically shows the result of pupil center positioning of an embodiment of the present application. As shown in fig. 8, after the coordinates or positions of the center point of the pupil in the target eye image are obtained, the position of the center point of the pupil in the target eye image may be marked with a "+" symbol.
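As an illustrative finish to the sketches above (marker style, size and color are arbitrary choices; `contour` and `eye` are as in the earlier snippets):

```python
import cv2

cx, cy = pupil_center(tangent_points(contour))
annotated = cv2.cvtColor(eye, cv2.COLOR_GRAY2BGR)
# Draw a red "+" at the located pupil center, as in Fig. 8.
cv2.drawMarker(annotated, (round(cx), round(cy)), color=(0, 0, 255),
               markerType=cv2.MARKER_CROSS, markerSize=15, thickness=2)
cv2.imwrite("center_marked.png", annotated)
```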
To verify the feasibility and effect of the pupil center positioning method of the embodiments of the present application, the inventors retrieved 756 target eye images from an eye image database and ran a traversal test; fig. 9 shows some of the target eye images with pupil centers positioned by the method of the embodiments of the present application. As shown in fig. 9, the method locates the pupil center point well: the located center point coincides with the actual center position with almost no deviation, indicating that the method can position the pupil center accurately.
Under the same condition, the inventor respectively uses a least square method ellipse fitting pupil center positioning method and the pupil center positioning method of the embodiment of the application to perform pupil center positioning on the 756 target eye images, and the statistical result is shown in table 1.
Table 1 shows the result of performing pupil center positioning on the 756 target eye images by using the least square ellipse fitting pupil center positioning method and the pupil center positioning method in the embodiment of the present application.
[Table 1 is not reproduced here; the recognition rates and average positioning times it reports are quoted in the following paragraph.]
As shown in table 1, the recognition rate of the pupil center positioning method of the embodiment of the application is 98.3%, while that of least-squares ellipse fitting is 98.8%; the two rates are close. The method of the embodiment of the application takes less time on average to position the pupil center, showing that it shortens positioning time and achieves rapid positioning.
Based on the pupil center positioning method provided by the above embodiment, correspondingly, the application further provides a specific implementation manner of the pupil center positioning device. Please see the examples below.
Referring first to fig. 10, a pupil center positioning device 100 provided in an embodiment of the present application may include the following units:
an acquisition unit 1001 for acquiring a target eye image;
a first determination unit 1002 for determining the contour of the pupil in the target eye image;
a second determining unit 1003, configured to determine a circumscribed figure of the contour, and acquire coordinates of a tangent point of the contour and the circumscribed figure in the target eye image;
and a third determining unit 1004 for determining the position of the center point of the pupil in the target eye image according to the coordinates of the tangent point.
The pupil center positioning device provided by the embodiment of the application first acquires a target eye image; then determines the contour of the pupil in the target eye image and a circumscribed figure of the contour; and finally determines the position of the pupil center point in the target eye image from the coordinates of the tangent points of the contour and the circumscribed figure. Because the center point is determined from the tangent-point coordinates of the pupil contour and its circumscribed figure, no mathematical fitting equation is needed in the positioning process, the coordinate calculation is simple, no large amount of mathematical operation is involved, the calculation amount is small, the time to compute the center point is short, and rapid pupil center positioning can be realized.
As an implementation manner of the present application, in order to save the time for positioning the pupil center and achieve fast positioning, the obtaining unit 1001 may obtain the target eye image in a near-to-eye manner. Compared with the eye image acquired in a desktop mode, the method can save the time consumed by extracting the eye region in the desktop mode, so that the time for positioning the pupil center is saved, the rapid positioning is realized, the details of the acquired target eye image are clearer, and the analysis on the eye characteristics of the pupil, the iris and the like is more convenient and efficient.
As an implementation manner of the present application, the first determining unit 1002 is specifically configured to perform image binarization processing on a target eye image according to a preset threshold value to obtain a binarized image including a pupil; screening pixel points in the binary image to obtain target pixel points on the edge of the pupil; and determining the contour of the pupil according to the target pixel point.
As another implementation manner of the present application, in order to make the pupil area in the converted binarized image more accurate and reasonable, the pupil center positioning device 100 may further include: the preset threshold setting unit is used for acquiring the occurrence frequency of each level of gray value in a preset gray value range in the target eye image; constructing a gray level histogram of the corresponding relation between each level of gray level value and the occurrence frequency; and taking the gray value corresponding to the minimum value between the first maximum value and the second maximum value of the occurrence frequency in the gray histogram as a preset threshold value.
As an implementation manner of the present application, in order to accurately extract the pupil contour, the first determining unit 1002 is specifically configured to perform a plane convolution operation on each pixel point in the binarized image with the transverse convolution factor and the longitudinal convolution factor of the Sobel operator, so as to obtain the gradient amplitude of each pixel point in the binarized image; perform non-maximum suppression processing on the gradient amplitudes; and extract first pixel points satisfying a preset condition in the binarized image as target pixel points, wherein the preset condition is either of the following: the gradient amplitude of the first pixel point is larger than a preset first threshold; or, when the gradient amplitude of the first pixel point is smaller than or equal to the preset first threshold and larger than a preset second threshold, a pixel point whose gradient amplitude is larger than the preset first threshold exists in the eight-neighborhood of the first pixel point.
As another implementation manner of the present application, in order to avoid the influence of noise and invalid information in the target eye image on the subsequent steps, the pupil center positioning apparatus 100 may further include: the preprocessing unit is used for preprocessing the target eye image, and the preprocessing comprises the following steps: gaussian filtering processing, opening operation and closing operation.
As an implementation manner of the present application, the second determining unit 1003 is specifically configured to: acquiring a first coordinate of each target pixel point in the target eye image; and respectively taking the first coordinate with the minimum abscissa, the first coordinate with the maximum abscissa, the first coordinate with the minimum ordinate and the first coordinate with the maximum ordinate in the first coordinates as the coordinates of the tangent points.
As another implementation manner of the present application, in order to visually display the position of the central point of the pupil, the pupil center positioning device 100 may further include: and the marking unit is used for marking the position of the central point of the pupil in the target eye image by using the preset identification.
Each module/unit in the apparatus shown in fig. 10 has a function of implementing each step in fig. 2, and can achieve the corresponding technical effect, and for brevity, the description is not repeated here.
Based on the pupil center positioning method provided by the above embodiment, accordingly, the application further provides a specific implementation manner of the electronic device. Please see the examples below.
Fig. 11 shows a hardware structure diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 11, the electronic device may include a processor 1101 and a memory 1102 in which computer program instructions are stored.
Specifically, the processor 1101 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
Memory 1102 may include mass storage for data or instructions. By way of example, and not limitation, memory 1102 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. In one example, memory 1102 may include removable or non-removable (or fixed) media, or memory 1102 may be non-volatile solid-state memory. The memory 1102 may be internal or external to the integrated gateway disaster recovery device.
In one example, memory 1102 may be read-only memory (ROM). In one example, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these.
Memory 1102 may include Read Only Memory (ROM), Random Access Memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions and when the software is executed (e.g., by one or more processors), it is operable to perform operations described with reference to the methods according to an aspect of the application.
The processor 1101 reads and executes the computer program instructions stored in the memory 1102 to implement the methods/steps S101 to S104 in the embodiment shown in fig. 2, and achieve the corresponding technical effects achieved by the implementation of the method/steps in the embodiment shown in fig. 2, which are not described herein again for brevity.
In one example, the electronic device may also include a communication interface 1103 and a bus 1110. As shown in Fig. 11, the processor 1101, the memory 1102, and the communication interface 1103 are connected via the bus 1110 and communicate with one another.
The communication interface 1103 is mainly used for implementing communication between modules, apparatuses, units and/or devices in this embodiment of the present application.
The bus 1110 includes hardware, software, or both that couple the components of the electronic device to one another. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or another suitable bus, or a combination of two or more of these. Where appropriate, the bus 1110 may include one or more buses. Although specific buses are described and shown in the embodiments of the present application, any suitable bus or interconnect is contemplated.
In addition, in combination with the pupil center positioning method in the foregoing embodiments, embodiments of the present application provide a computer storage medium having computer program instructions stored thereon; when executed by a processor, the computer program instructions implement any one of the pupil center positioning methods of the above embodiments.
To sum up, the pupil center positioning method, device, equipment, and computer storage medium provided in the embodiments of the present application first acquire a target eye image; then determine the contour of the pupil in the target eye image and determine a circumscribed figure of the contour; and finally determine the position of the center point of the pupil in the target eye image according to the acquired coordinates of the tangent points of the contour and the circumscribed figure. Because the position of the pupil's center point is determined from the tangent-point coordinates of the pupil contour and its circumscribed figure, no mathematical fitting equation is needed in the positioning process: the coordinate calculation is simple, involves no large amount of mathematical operations, and the center point is computed and located quickly, so that fast pupil center positioning can be realized.
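As a rough end-to-end illustration of this pipeline, the following sketch chains the steps together with OpenCV. The threshold value, kernel sizes, file name, and function name are illustrative assumptions of this sketch, not parameters prescribed by the application:

```python
import cv2
import numpy as np

def locate_pupil_center(eye_image: np.ndarray) -> tuple:
    """Return (cx, cy) of the pupil in a grayscale eye image."""
    # Preprocessing: Gaussian smoothing plus morphological open/close
    # to suppress eyelashes and specular reflections.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    img = cv2.GaussianBlur(eye_image, (5, 5), 0)
    img = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
    img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

    # Binarization: the pupil is the darkest region, so an inverse
    # threshold turns it into the white foreground (value 50 assumed).
    _, binary = cv2.threshold(img, 50, 255, cv2.THRESH_BINARY_INV)

    # Edge pixels of the pupil (Canny bundles the Sobel gradients,
    # non-maximum suppression, and double-threshold screening).
    edges = cv2.Canny(binary, 50, 150)
    ys, xs = np.nonzero(edges)

    # Tangent points with the circumscribed rectangle reduce to the
    # extreme coordinates; the center is the midpoint between them.
    cx = (int(xs.min()) + int(xs.max())) / 2
    cy = (int(ys.min()) + int(ys.max())) / 2
    return cx, cy

# Usage (file name illustrative): mark the located center on the image.
# eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
# cx, cy = locate_pupil_center(eye)
# cv2.circle(eye, (int(cx), int(cy)), 3, 255, -1)
```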
It is to be understood that the present application is not limited to the particular arrangements and instrumentalities described above and shown in the accompanying drawings. A detailed description of known methods is omitted here for brevity. Several specific steps are described and shown in the above embodiments as examples; however, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, additions, or reorderings of the steps within the spirit of the present application.
The functional blocks shown in the structural block diagrams described above may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, they may be, for example, electronic circuits, application-specific integrated circuits (ASICs), suitable firmware, plug-ins, or function cards. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber-optic media, radio-frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the steps described above; the steps may be performed in the order mentioned in the embodiments, in a different order, or simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware for performing the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As described above, only specific embodiments of the present application are provided. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the system, modules, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. It should be understood that the scope of the present application is not limited thereto; any person skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope disclosed in the present application, and these modifications or substitutions shall fall within the scope of the present application.

Claims (10)

1. A method for pupil center location, comprising:
acquiring a target eye image;
determining a contour of a pupil in the target eye image;
determining a circumscribed figure of the outline, and acquiring coordinates of a tangent point of the outline and the circumscribed figure in the target eye image;
and determining the position of the central point of the pupil in the target eye image according to the coordinates of the tangent point.
2. The method according to claim 1, wherein the determining the contour of the pupil in the target eye image specifically comprises:
carrying out image binarization processing on the target eye image according to a preset threshold value to obtain a binarized image comprising a pupil;
screening pixel points in the binary image to obtain target pixel points on the edge of the pupil;
and determining the outline according to the target pixel point.
3. The method according to claim 2, wherein before the image binarization processing is performed on the target eye image according to the preset threshold value to obtain the binarized image comprising the pupil, the method further comprises:
acquiring the occurrence frequency of each level of gray value in a preset gray value range in the target eye image;
constructing a gray level histogram of the corresponding relation between each level of gray level value and the occurrence frequency;
and taking the gray value corresponding to the minimum value between the first maximum value and the second maximum value of the occurrence frequency in the gray histogram as the preset threshold value.
4. The method according to claim 2, wherein the step of screening the pixel points in the binarized image to obtain target pixel points on the edge of the pupil specifically comprises:
performing plane convolution operation on each pixel point in the binary image by using a transverse convolution factor of the Sobel convolution factor and a longitudinal convolution factor of the Sobel convolution factor to obtain a gradient amplitude value of each pixel point in the binary image;
carrying out non-maximum suppression processing on the gradient amplitude;
extracting first pixel points which meet preset conditions in the binarized image as the target pixel points, wherein the preset conditions comprise:
the gradient amplitude of the first pixel point is larger than a preset first threshold value;
under the condition that the gradient amplitude of the first pixel point is smaller than or equal to the preset first threshold and larger than a preset second threshold, a pixel point with a gradient amplitude larger than the preset first threshold exists in the eight-neighborhood of the first pixel point.
5. The method of claim 1, wherein prior to determining the contour of the pupil in the target eye image, the method further comprises:
preprocessing the target eye image, the preprocessing comprising: gaussian filtering processing, opening operation and closing operation;
determining the contour of the pupil in the target eye image specifically includes:
determining the contour of the pupil in the preprocessed target eye image.
6. The method according to claim 2, wherein in a case that the circumscribed figure is a circumscribed rectangle, the obtaining coordinates of a tangent point of the outline and the circumscribed rectangle in the target eye image specifically comprises:
acquiring a first coordinate of each target pixel point in the target eye image;
and respectively taking the first coordinate with the minimum abscissa, the first coordinate with the maximum abscissa, the first coordinate with the minimum ordinate and the first coordinate with the maximum ordinate in the first coordinates as the coordinates of the tangent points.
7. The method of claim 2, wherein after determining the location of the center point of the pupil in the target eye image, the method further comprises:
and marking the position of the central point in the target eye image by using a preset identification.
8. A pupil centering device, comprising:
an acquisition unit configured to acquire a target eye image;
a first determination unit configured to determine a contour of a pupil in the target eye image;
the second determining unit is used for determining a circumscribed figure of the outline and acquiring coordinates of a tangent point of the outline and the circumscribed figure in the target eye image;
and the third determining unit is used for determining the position of the central point of the pupil in the target eye image according to the coordinates of the tangent point.
9. An electronic device, characterized in that the device comprises: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the pupil center positioning method according to any one of claims 1 to 7.
10. A computer storage medium, characterized in that a computer program is stored on the computer storage medium, and the computer program, when executed by a processor, implements the steps of the pupil center positioning method according to any one of claims 1 to 7.
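A hedged sketch of the threshold selection recited in claim 3: the preset threshold is the gray value of the minimum lying between the first and second maxima of the gray-level histogram. The smoothing step and all names here are assumptions of this sketch, added only so that small histogram fluctuations are not mistaken for maxima:

```python
import numpy as np

def histogram_valley_threshold(gray: np.ndarray) -> int:
    """Gray value of the minimum between the histogram's first two
    maxima (assumes a bimodal histogram: dark pupil vs. brighter rest)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    # Light smoothing (an assumption of this sketch) so that tiny
    # fluctuations are not counted as local maxima.
    hist = np.convolve(hist, np.ones(5) / 5, mode="same")
    # Local maxima: bins strictly higher than both neighbors.
    peaks = [i for i in range(1, 255) if hist[i - 1] < hist[i] > hist[i + 1]]
    p1, p2 = peaks[0], peaks[1]  # first and second maxima on the gray axis
    return p1 + int(np.argmin(hist[p1:p2 + 1]))
```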
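Likewise, the screening of claim 4 can be sketched as a Sobel gradient pass followed by the two-threshold neighborhood rule. Non-maximum suppression of the gradient amplitudes is elided here for brevity, and the threshold values and the use of SciPy's `maximum_filter` to test the eight-neighborhood are choices of this sketch, not of the claim:

```python
import cv2
import numpy as np
from scipy.ndimage import maximum_filter

def screen_edge_pixels(binary: np.ndarray,
                       t_high: float = 150.0,
                       t_low: float = 50.0) -> np.ndarray:
    """Return (x, y) pixels kept by the two-threshold rule of claim 4."""
    # Plane convolution with the horizontal and vertical Sobel factors,
    # then the gradient amplitude of each pixel.
    gx = cv2.Sobel(binary, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(binary, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)

    # (Non-maximum suppression of `magnitude` would run here; elided.)

    strong = magnitude > t_high
    weak = (magnitude <= t_high) & (magnitude > t_low)
    # A weak pixel survives only if a strong pixel lies in its
    # eight-neighborhood; a 3x3 maximum filter covers exactly that.
    has_strong_neighbor = maximum_filter(strong.astype(np.uint8), size=3) > 0
    keep = strong | (weak & has_strong_neighbor)
    ys, xs = np.nonzero(keep)
    return np.column_stack([xs, ys])
```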
CN202010993486.9A 2020-09-21 2020-09-21 Pupil center positioning method, pupil center positioning device, pupil center positioning equipment and computer storage medium Active CN112258569B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010993486.9A CN112258569B (en) 2020-09-21 2020-09-21 Pupil center positioning method, pupil center positioning device, pupil center positioning equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010993486.9A CN112258569B (en) 2020-09-21 2020-09-21 Pupil center positioning method, pupil center positioning device, pupil center positioning equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN112258569A true CN112258569A (en) 2021-01-22
CN112258569B CN112258569B (en) 2024-04-09

Family

ID=74232461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010993486.9A Active CN112258569B (en) 2020-09-21 2020-09-21 Pupil center positioning method, pupil center positioning device, pupil center positioning equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN112258569B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112989939A * 2021-02-08 2021-06-18 Foshan Qingteng Information Technology Co., Ltd. Vision-based strabismus detection system
CN114093018A * 2021-11-23 2022-02-25 Henan Children's Hospital (Zhengzhou Children's Hospital) Eyesight screening equipment and system based on pupil positioning
CN115170992A * 2022-09-07 2022-10-11 Shandong Shuifa Dafeng Renewable Resources Co., Ltd. Image identification method and system for scattered blanking of scrap steel yard
CN115294202A * 2022-10-08 2022-11-04 Nanchang Virtual Reality Institute Co., Ltd. Pupil position marking method and system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002000567A * 2000-06-23 2002-01-08 Kansai TLO KK Method of measuring pupil center position and method of detecting view point position
CN101211413A * 2006-12-28 2008-07-02 Tai Zheng Quick pupil center positioning method based on video image processing
CN103136512A * 2013-02-04 2013-06-05 Chongqing Academy of Science and Technology Pupil positioning method and system
CN104809458A * 2014-12-29 2015-07-29 Huawei Technologies Co., Ltd. Pupil center positioning method and pupil center positioning device
CN106326880A * 2016-09-08 2017-01-11 University of Electronic Science and Technology of China Pupil center point positioning method
US20200278744A1 * 2018-04-24 2020-09-03 Boe Technology Group Co., Ltd. Pupil center positioning apparatus and method, and virtual reality device
CN109766818A * 2019-01-04 2019-05-17 BOE Technology Group Co., Ltd. Pupil center positioning method and system, computer device, and readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
IBRAHIM FURKAN INCE et al., "A Low-Cost Pupil Center Localization Algorithm Based on Maximized Integral Voting of Circular Hollow Kernels", The Computer Journal, pages 1001-1015 *
WANG Changyuan et al., "Research on Fast Pupil Center Positioning Method", Computer Engineering and Applications, pages 196-198 *
ZHAO Haoran et al., "Fast Pupil Center Positioning Algorithm for Near-Eye Applications", Telecommunication Engineering, pages 1102-1107 *


Also Published As

Publication number Publication date
CN112258569B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
CN112258569A (en) Pupil center positioning method, device, equipment and computer storage medium
CN115018828B (en) Defect detection method for electronic component
CN112837290B (en) Crack image automatic identification method based on seed filling algorithm
CN107220649A (en) A kind of plain color cloth defects detection and sorting technique
Azad et al. New method for optimization of license plate recognition system with use of edge detection and connected component
CN111259908A (en) Machine vision-based steel coil number identification method, system, equipment and storage medium
TWI765442B (en) Method for defect level determination and computer readable storage medium thereof
CN117094975A (en) Method and device for detecting surface defects of steel and electronic equipment
CN112669295A (en) Lithium battery pole piece defect detection method based on secondary threshold segmentation theory
CN105447489A (en) Character and background adhesion noise elimination method for image OCR system
CN111476804A (en) Method, device and equipment for efficiently segmenting carrier roller image and storage medium
CN115587966A (en) Method and system for detecting whether parts are missing or not under condition of uneven illumination
CN117078688B (en) Surface defect identification method for strong-magnetic neodymium-iron-boron magnet
CN114155226A (en) Micro defect edge calculation method
CN116934746B (en) Scratch defect detection method, system, equipment and medium thereof
CN116523922B (en) Bearing surface defect identification method
Shuo et al. Digital recognition of electric meter with deep learning
CN113449745B (en) Method, device and equipment for identifying marker in calibration object image and readable medium
Guo et al. Fault diagnosis of power equipment based on infrared image analysis
CN115619725A (en) Electronic component detection method and device, electronic equipment and automatic quality inspection equipment
CN114119569A (en) Imaging logging image crack segmentation and identification method and system based on machine learning
CN116309562B (en) Board defect identification method and system
CN112652004B (en) Image processing method, device, equipment and medium
CN112329572B (en) Rapid static living body detection method and device based on frame and flash point
CN112785550B (en) Image quality value determining method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication

SE01 Entry into force of request for substantive examination

TA01 Transfer of patent application right

Effective date of registration: 20230802

Address after: Room 702, Block C, Swan Tower, No. 111 Linghu Avenue, Xinwu District, Wuxi City, Jiangsu Province, 214028

Applicant after: Wuxi Tanggu Semiconductor Co.,Ltd.

Address before: 215128 unit 4-a404, creative industry park, 328 Xinghu street, Suzhou Industrial Park, Suzhou City, Jiangsu Province

Applicant before: Suzhou Tanggu Photoelectric Technology Co.,Ltd.

GR01 Patent grant