CN106373155A - Eyeball center positioning method, device and system - Google Patents

Eyeball center positioning method, device and system

Info

Publication number: CN106373155A
Application number: CN201610799050.XA
Authority: CN (China)
Prior art keywords: pixel, pixel point, target image, map, image
Legal status: Granted
Other languages: Chinese (zh)
Other versions: CN106373155B
Inventors: 张勇, 何茜
Current Assignee: Beijing Beta Technology Co ltd
Original Assignee: Beijing Fotoable Technology Ltd

Application filed by Beijing Fotoable Technology Ltd
Priority to CN201610799050.XA
Publication of CN106373155A
Application granted
Publication of CN106373155B
Current legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide an eyeball center positioning method, device and system. A mask grayscale image of a target image (i.e. of an approximate region containing the eye) is obtained, and the color of each pixel point in the target image is inverted to obtain a target inversion map. An x-direction gradient map and a y-direction gradient map corresponding to the target image are calculated. Then, for each pixel point of the target inversion map, the sum Sum_{i,j} of the squared dot products between its unit position vectors and the unit gradient vectors of the pixel points in the gradient maps is calculated, and a result map is obtained that takes the Sum_{i,j} corresponding to each pixel point of the target inversion map as pixel values. After the result map is normalized and inverted, the weighted average coordinate of the pixel points in the inversion result map is calculated from the pixel value and the coordinate of each pixel point, and the eyeball center coordinate is determined from it. The center position of the eyeball is thereby determined accurately.

Description

Eyeball center positioning method, device and system
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a system for locating an eyeball center.
Background
In image beautification based on image processing, the "beauty pupil" (a cosmetic-contact-lens effect) is an important step: rendering an attractive beauty pupil over the eyes of the person in the input image can markedly improve the beautification result. This requires a very precise eyeball center positioning technique, since even a small deviation of the center position causes the beauty pupil to be fitted out of alignment with the eyeball in the image.
Therefore, there is a need in the art for a method for highly accurate center positioning of an eyeball.
Disclosure of Invention
In view of this, the present invention provides an eyeball center positioning method, device and system, so as to overcome the problem in the prior art that the eyeball center position cannot be determined accurately during portrait beautification, which leads to misalignment between the beauty pupil and the eyeball in the image.
To achieve this purpose, the invention provides the following technical solutions:
An eyeball center positioning method, comprising:
acquiring coordinates of the outer contour feature points of an eye in an image;
obtaining a target image containing the eye according to the coordinates of the outer contour feature points;
setting the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and setting the pixel value of each pixel of the eye-exterior image in the target image to 0, to obtain a mask grayscale image;
calculating the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image, to obtain an x-direction gradient map composed of the x-direction gradients of the pixel points and a y-direction gradient map composed of the y-direction gradients of the pixel points;
normalizing each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map;
normalizing each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map;
inverting the color of each pixel point in the target image according to the target image and the mask grayscale image, to obtain a target inversion map;
performing the following operation for each pixel point (i, j) with a non-zero pixel value in the target inversion map:
calculating the sum of squared dot products Sum_{i,j} = Σ_{(I,J)∈Ω} (P_{i,j}(I, J) · G_{I,J})^2, where G_{I,J} = (gX'(I, J), gY'(I, J)) is the unit gradient vector of the pixel point (I, J) in the gradient maps (the x-direction gradient map and the y-direction gradient map), P_{i,j}(I, J) is the unit position vector of the pixel point (I, J) relative to the pixel point (i, j), (I, J) ∈ Ω denotes traversing every pixel point of the gradient maps, gX'(I, J) is the pixel value of the pixel point (I, J) of the normalized x-direction gradient map, and gY'(I, J) is the pixel value of the pixel point (I, J) of the normalized y-direction gradient map;
obtaining a result map that takes the Sum_{i,j} corresponding to each pixel point of the target inversion map as pixel values;
normalizing each pixel value in the result map to the range 0 to 255 to obtain a normalized result map;
inverting the color of each pixel point in the normalized result map to obtain an inversion result map;
calculating the weighted average coordinate of the pixel points in the inversion result map according to the pixel value and the coordinate of each pixel point in the inversion result map;
and determining the eyeball center coordinate according to the weighted average coordinate.
An eyeball center positioning device, comprising:
a first obtaining module, configured to obtain the coordinates of the outer contour feature points of an eye in an image;
a second obtaining module, configured to obtain a target image containing the eye according to the coordinates of the outer contour feature points;
a third obtaining module, configured to set the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and set the pixel value of each pixel of the eye-exterior image in the target image to 0, to obtain a mask grayscale image;
a fourth obtaining module, configured to calculate the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image, to obtain an x-direction gradient map composed of the x-direction gradients of the pixel points and a y-direction gradient map composed of the y-direction gradients of the pixel points;
a fifth obtaining module, configured to normalize each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map;
a sixth obtaining module, configured to normalize each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map;
a seventh obtaining module, configured to invert the color of each pixel point in the target image according to the target image and the mask grayscale image, to obtain a target inversion map;
a first calculating module, configured to perform the following operation for each pixel point (i, j) with a non-zero pixel value in the target inversion map:
calculating the sum of squared dot products Sum_{i,j} = Σ_{(I,J)∈Ω} (P_{i,j}(I, J) · G_{I,J})^2, where G_{I,J} = (gX'(I, J), gY'(I, J)) is the unit gradient vector of the pixel point (I, J) in the gradient maps (the x-direction gradient map and the y-direction gradient map), P_{i,j}(I, J) is the unit position vector of the pixel point (I, J) relative to the pixel point (i, j), (I, J) ∈ Ω denotes traversing every pixel point of the gradient maps, gX'(I, J) is the pixel value of the pixel point (I, J) of the normalized x-direction gradient map, and gY'(I, J) is the pixel value of the pixel point (I, J) of the normalized y-direction gradient map;
an eighth obtaining module, configured to obtain a result map that takes the Sum_{i,j} corresponding to each pixel point of the target inversion map as pixel values;
a ninth obtaining module, configured to normalize each pixel value in the result map to the range 0 to 255 to obtain a normalized result map;
a tenth obtaining module, configured to invert the color of each pixel point in the normalized result map to obtain an inversion result map;
a second calculating module, configured to calculate the weighted average coordinate of the pixel points in the inversion result map according to the pixel value and the coordinate of each pixel point in the inversion result map;
and a determining module, configured to determine the eyeball center coordinate according to the weighted average coordinate.
An eyeball center positioning system, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform:
acquiring the coordinates of the outer contour feature points of an eye in an image;
obtaining a target image containing the eye according to the coordinates of the outer contour feature points;
setting the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and setting the pixel value of each pixel of the eye-exterior image in the target image to 0, to obtain a mask grayscale image;
calculating the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image, to obtain an x-direction gradient map composed of the x-direction gradients of the pixel points and a y-direction gradient map composed of the y-direction gradients of the pixel points;
normalizing each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map;
normalizing each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map;
inverting the color of each pixel point in the target image according to the target image and the mask grayscale image, to obtain a target inversion map;
performing the following operation for each pixel point (i, j) with a non-zero pixel value in the target inversion map:
calculating the sum of squared dot products Sum_{i,j} = Σ_{(I,J)∈Ω} (P_{i,j}(I, J) · G_{I,J})^2, where G_{I,J} = (gX'(I, J), gY'(I, J)) is the unit gradient vector of the pixel point (I, J) in the gradient maps (the x-direction gradient map and the y-direction gradient map), P_{i,j}(I, J) is the unit position vector of the pixel point (I, J) relative to the pixel point (i, j), (I, J) ∈ Ω denotes traversing every pixel point of the gradient maps, gX'(I, J) is the pixel value of the pixel point (I, J) of the normalized x-direction gradient map, and gY'(I, J) is the pixel value of the pixel point (I, J) of the normalized y-direction gradient map;
obtaining a result map that takes the Sum_{i,j} corresponding to each pixel point of the target inversion map as pixel values;
normalizing each pixel value in the result map to the range 0 to 255 to obtain a normalized result map;
inverting the color of each pixel point in the normalized result map to obtain an inversion result map;
calculating the weighted average coordinate of the pixel points in the inversion result map according to the pixel value and the coordinate of each pixel point in the inversion result map;
and determining the eyeball center coordinate according to the weighted average coordinate.
Through the above technical solutions, compared with the prior art, in the eyeball center positioning method provided by the present invention, a target image containing the eye is obtained through the coordinates of the outer contour feature points of the eye, a mask grayscale image of the target image is then obtained, and, exploiting the characteristic that the eyeball is dark, the color of each pixel point in the target image is inverted according to the target image and the mask grayscale image to obtain a target inversion map. The x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image are calculated to obtain an x-direction gradient map and a y-direction gradient map; each x-direction gradient in the x-direction gradient map is then normalized to obtain a normalized x-direction gradient map, and each y-direction gradient in the y-direction gradient map is normalized to obtain a normalized y-direction gradient map. Then, for each pixel point (i, j), the sum Sum_{i,j} of the squared dot products between the unit gradient vectors G_{I,J} = (gX'(I, J), gY'(I, J)) of the pixel points (I, J) in the gradient maps and the corresponding unit position vectors is calculated, and a result map is obtained that takes the Sum_{i,j} corresponding to each pixel point of the target inversion map as pixel values. Each pixel value in the result map is normalized to the range 0 to 255 to obtain a normalized result map, the color of each pixel point in the normalized result map is inverted to obtain an inversion result map, the weighted average coordinate of the pixel points in the inversion result map is calculated according to the pixel value and the coordinate of each pixel point, and the eyeball center coordinate is determined according to the weighted average coordinate. The purpose of accurately determining the eyeball center is thereby achieved.
Drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an eyeball center positioning method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of the outer contour feature points of an eye provided in an embodiment of the present application;
FIG. 3 is a mask grayscale image provided by an embodiment of the present application;
FIG. 4 is an x-direction gradient map provided by an embodiment of the present application;
FIG. 5 is a y-direction gradient map provided by an embodiment of the present application;
FIG. 6 is an amplitude map provided by an embodiment of the present application;
FIG. 7 is a target inversion map provided by an embodiment of the present application;
FIG. 8 is an inversion result map provided by an embodiment of the present application;
fig. 9 is a schematic flowchart of an implementation of obtaining a target image containing the eye according to the coordinates of the outer contour feature points in an eyeball center positioning method provided in the embodiment of the present application;
fig. 10 is a schematic structural diagram of an eyeball center positioning device provided in the embodiment of the application;
fig. 11 is a schematic structural diagram of an implementation manner of a second obtaining module in an eyeball center positioning device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a schematic flow chart of an eyeball center positioning method according to an embodiment of the present disclosure is shown, where the method includes:
step S101: coordinates of the outsourcing contour feature points of the eye in the image are acquired.
There are many ways to obtain the outer contour feature points of the eye, for example the ASM (Active Shape Model) method or a neural network method.
Fig. 2 is a schematic diagram of the outer contour feature points of an eye provided in an embodiment of the present application.
As shown in fig. 2, the points 21 are the outer contour feature points of the eye.
Step S102: obtaining a target image containing the eye according to the coordinates of the outer contour feature points.
The target image is an approximate image containing the eye; the image shown in fig. 2 is the target image in one implementation of the embodiment of the present application.
Step S103: setting the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and setting the pixel value of each pixel of the eye-exterior image in the target image to 0, to obtain a mask grayscale image.
The eye-interior image enclosed by the outer contour feature points can be obtained by drawing a closed curve through the feature points and filling its interior.
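As a concrete illustration, a minimal sketch of this mask construction in Python with OpenCV follows. The function and variable names are ours, not the patent's, and the closed-curve fill is approximated here by a polygon fill through the feature points.

```python
import numpy as np
import cv2

def make_eye_mask(target_image, eye_points):
    """Sketch of step S103 (names are ours): fill the closed curve through
    the outer contour feature points with 255; everything outside stays 0."""
    mask = np.zeros(target_image.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(eye_points, dtype=np.int32)], 255)
    return mask
```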
Fig. 3 shows the mask grayscale image provided in an embodiment of the present application.
As can be seen from fig. 3, the mask grayscale image includes an eye-interior image 31 and an eye-exterior image 32.
Step S104: calculating the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image, to obtain an x-direction gradient map composed of the x-direction gradients of the pixel points and a y-direction gradient map composed of the y-direction gradients of the pixel points.
The x-direction gradient map and the y-direction gradient map can be calculated with first-order differential operators such as Sobel or Prewitt. Taking the Sobel operator as an example, they can be calculated by the following formulas:
gX(i, j) = SobelX(Src(i, j)); gY(i, j) = SobelY(Src(i, j));
where Src(i, j) represents the pixel value of the pixel point (i, j) in the target image, and SobelX and SobelY denote convolution with the standard 3×3 Sobel kernels [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]] and [[-1, -2, -1], [0, 0, 0], [1, 2, 1]], respectively.
Fig. 4 shows the x-direction gradient map provided in the embodiment of the present application, fig. 5 shows the y-direction gradient map, and fig. 6 shows the amplitude map.
The pixel value of each pixel point in the amplitude map is calculated according to the following formula:
mag_{i,j} = sqrt((gX(i, j))^2 + (gY(i, j))^2);
where mag_{i,j} is the pixel value of the pixel point (i, j) in the amplitude map.
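For illustration, a short sketch of step S104 and the amplitude map using OpenCV's Sobel operator (one of the first-order operators named above); the helper name is ours.

```python
import numpy as np
import cv2

def gradient_maps(target_gray):
    """Sketch of step S104 with the Sobel operator; `target_gray` is the
    grayscale target image as a float array."""
    gX = cv2.Sobel(target_gray, cv2.CV_64F, 1, 0, ksize=3)  # x-direction gradient map
    gY = cv2.Sobel(target_gray, cv2.CV_64F, 0, 1, ksize=3)  # y-direction gradient map
    mag = np.sqrt(gX ** 2 + gY ** 2)                        # amplitude map mag_{i,j}
    return gX, gY, mag
```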
Step S105: normalizing each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map.
The normalization may be a maximum-minimum normalization, for example, the maximum value corresponds to 1 and the minimum value corresponds to 0.
Step S106: normalizing each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map.
The normalization may be a maximum-minimum normalization, for example, the maximum value corresponds to 1 and the minimum value corresponds to 0.
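A minimal sketch of this maximum-minimum normalization (steps S105/S106), under the stated convention that the maximum maps to 1 and the minimum to 0:

```python
def min_max_normalize(gradient_map):
    """Maximum-minimum normalization of steps S105/S106 on a NumPy array:
    minimum -> 0, maximum -> 1."""
    lo, hi = gradient_map.min(), gradient_map.max()
    if hi == lo:
        return gradient_map * 0.0  # flat map: nothing to stretch
    return (gradient_map - lo) / (hi - lo)
```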
Step S107: inverting the color of each pixel point in the target image according to the target image and the mask grayscale image, to obtain a target inversion map.
The target inversion diagram can be obtained according to the following formula:
Weight_{i,j} = (255 - Src_{i,j}) × Mask_{i,j} / 255;
where Weight_{i,j} is the pixel value of the pixel point (i, j) in the target inversion map, Src_{i,j} is the pixel value of the pixel point (i, j) in the target image, and Mask_{i,j} is the pixel value of the pixel point (i, j) in the mask grayscale image.
Exploiting the characteristic that the eyeball is dark, the color of the target image is inverted so that the eyeball, whose color has a low gray value, becomes white. The target inversion map of the target image is used as a prior weight, and fully combining the x-direction gradient map, the y-direction gradient map and the target inversion map greatly improves the precision of the eyeball center positioning method.
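A sketch of this inversion (step S107), directly transcribing the formula above; the input and output types are our assumptions.

```python
import numpy as np

def target_inversion_map(target_gray, mask):
    """Sketch of step S107: Weight_{i,j} = (255 - Src_{i,j}) * Mask_{i,j} / 255.
    `target_gray` is the grayscale target image and `mask` the mask grayscale
    image (values 0 or 255), both uint8 arrays of the same shape."""
    src = target_gray.astype(np.float64)
    weight = (255.0 - src) * (mask.astype(np.float64) / 255.0)
    return weight  # dark (eyeball) pixels now carry large prior weights
```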
Fig. 7 shows the target inversion map provided in the embodiment of the present application.
Comparing fig. 4, fig. 5, fig. 6 and fig. 7, it can be seen that the eyeball has changed from gray to white.
Step S108: performing the following operation for each pixel point (i, j) with a non-zero pixel value in the target inversion map:
calculating the sum of squared dot products
Sum_{i,j} = Σ_{(I,J)∈Ω} (P_{i,j}(I, J) · G_{I,J})^2;
where G_{I,J} = (gX'(I, J), gY'(I, J)) is the unit gradient vector of the pixel point (I, J) in the gradient maps (the x-direction gradient map and the y-direction gradient map), P_{i,j}(I, J) is the unit position vector of the pixel point (I, J) relative to the pixel point (i, j), (I, J) ∈ Ω denotes traversing every pixel point of the gradient maps, I is greater than or equal to 0 and less than the total number of rows M of pixel points in the gradient maps, J is greater than or equal to 0 and less than the total number of columns N, gX'(I, J) is the pixel value of the pixel point (I, J) of the normalized x-direction gradient map, and gY'(I, J) is the pixel value of the pixel point (I, J) of the normalized y-direction gradient map.
Step S109: obtaining a result map that takes the Sum_{i,j} corresponding to each pixel point of the target inversion map as pixel values.
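The patent's formula image for Sum_{i,j} is not reproduced in this text, so the following sketch implements our reading of the passage: for every candidate pixel (i, j) with a non-zero prior weight, accumulate the squared dot products between unit position vectors and unit gradient vectors. The function name, the `eps` cutoff, and the per-pixel magnitude normalization used to form unit gradient vectors are our assumptions.

```python
import numpy as np

def squared_dot_product_sums(weight, gx_n, gy_n, eps=1e-6):
    """Sketch of steps S108/S109 (our reading): for each pixel (i, j) with
    weight[i, j] != 0, Sum_{i,j} = sum over (I, J) of the squared dot product
    of the unit vector from (i, j) to (I, J) with the unit gradient at (I, J).
    Naive O(n^2) over all pixel pairs; eye crops are small, so this is usable.
    """
    rows, cols = weight.shape
    # One plausible reading of "unit gradient vector": divide each gradient
    # vector by its magnitude (near-zero gradients are dropped).
    mag = np.sqrt(gx_n ** 2 + gy_n ** 2)
    keep = mag > eps
    gx_u = np.where(keep, gx_n / np.maximum(mag, eps), 0.0)
    gy_u = np.where(keep, gy_n / np.maximum(mag, eps), 0.0)

    II, JJ = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    result = np.zeros((rows, cols), dtype=np.float64)
    for i in range(rows):
        for j in range(cols):
            if weight[i, j] == 0:
                continue  # only non-zero pixels of the target inversion map
            di, dj = II - i, JJ - j
            norm = np.sqrt(di ** 2 + dj ** 2)
            norm[i, j] = 1.0  # avoid dividing by zero at (i, j) itself
            # Dot product of the unit position vector with the unit gradient.
            dot = (dj / norm) * gx_u + (di / norm) * gy_u
            dot[i, j] = 0.0
            result[i, j] = np.sum(dot ** 2)
    return result  # the result map of step S109
```

The passage on the prior weight suggests the inversion map does more than mask the candidates; multiplying `result[i, j]` by `weight[i, j]` would be one way to combine them, though the surviving text only uses the weight to select non-zero pixels.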
Step S110: normalizing each pixel value in the result map to the range 0 to 255 to obtain a normalized result map.
Step S111: inverting the color of each pixel point in the normalized result map to obtain an inversion result map.
The inversion result map can be obtained by the following formula:
Sum'_{i,j} = 255 - 255 × (Sum_{i,j} - minSum) / (maxSum - minSum);
where Sum'_{i,j} is the pixel value of the pixel point (i, j) in the inversion result map, Sum_{i,j} is the pixel value of the pixel point (i, j) in the result map, minSum is the minimum pixel value in the result map, and maxSum is the maximum pixel value in the result map.
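A direct transcription of steps S110/S111 (normalize to 0 to 255, then invert) as a sketch:

```python
def inversion_result_map(result):
    """Sketch of steps S110/S111 on a NumPy array:
    Sum'_{i,j} = 255 - 255 * (Sum_{i,j} - minSum) / (maxSum - minSum)."""
    min_sum, max_sum = result.min(), result.max()
    normalized = 255.0 * (result - min_sum) / (max_sum - min_sum)
    return 255.0 - normalized
```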
Fig. 8 shows the inversion result map provided in the embodiment of the present application.
As can be seen from fig. 8, the color of the center of the eyeball is significantly different from the color of the other regions.
Step S112: calculating the weighted average coordinate of the pixel points in the inversion result map according to the pixel value and the coordinate of each pixel point in the inversion result map.
The weighted average coordinate can be obtained by the following formulas:
Vi = Σ_{(i,j)∈Ω} (f(Sum'_{i,j}) × j) / Σ_{(i,j)∈Ω} f(Sum'_{i,j});
Vj = Σ_{(i,j)∈Ω} (f(Sum'_{i,j}) × i) / Σ_{(i,j)∈Ω} f(Sum'_{i,j});
where Sum'_{i,j} is the pixel value of the pixel point (i, j) in the inversion result map, (Vi, Vj) is the weighted average coordinate, and the function f(Sum'_{i,j}) is a mapping function that maps the pixel values 0 to 255 of the pixel points of the inversion result map into a preset range, such that the smaller Sum'_{i,j} is, the larger f(Sum'_{i,j}) is.
Optionally, f(x) = e^(-0.01x).
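A sketch of step S112 with the optional mapping f(x) = e^(-0.01x); note that, as in the formulas above, Vi pairs with the column index j and Vj with the row index i.

```python
import numpy as np

def weighted_average_coordinate(inv_result):
    """Sketch of step S112: weighted average of pixel coordinates, weighting
    each pixel by f(Sum'_{i,j}) = exp(-0.01 * Sum'_{i,j}), so a small Sum'
    (a strong center response) gets a large weight."""
    f = np.exp(-0.01 * inv_result)
    ii, jj = np.meshgrid(np.arange(inv_result.shape[0]),
                         np.arange(inv_result.shape[1]), indexing="ij")
    total = f.sum()
    Vi = (f * jj).sum() / total  # Vi pairs with the column index j
    Vj = (f * ii).sum() / total  # Vj pairs with the row index i
    return Vi, Vj
```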
The method abandons the prior-art approach of taking the maximum-value point as the optimal point; adopting instead a weighted average over the coordinate positions significantly improves the stability of the eyeball center positioning method.
Step S113: determining the eyeball center coordinate according to the weighted average coordinate.
In the embodiment of the present application, the target image, the mask grayscale image, the x-direction gradient map, the y-direction gradient map, the normalized x-direction gradient map, the normalized y-direction gradient map, the target inversion map, the result map, the normalized result map and the inversion result map all have the same total number of rows M and total number of columns N of pixel points, so i is an integer greater than or equal to 0 and less than M, and j is an integer greater than or equal to 0 and less than N.
According to the eyeball center positioning method provided by the embodiment of the present application, a target image containing the eye is obtained through the coordinates of the outer contour feature points of the eye, a mask grayscale image of the target image is then obtained, and, exploiting the characteristic that the eyeball is dark, the color of each pixel point in the target image is inverted according to the target image and the mask grayscale image to obtain a target inversion map. The x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image are calculated to obtain an x-direction gradient map and a y-direction gradient map, each of which is then normalized. Then, for each pixel point (i, j), the sum Sum_{i,j} of the squared dot products between the unit gradient vectors G_{I,J} = (gX'(I, J), gY'(I, J)) of the pixel points (I, J) in the gradient maps and the corresponding unit position vectors is calculated, and a result map is obtained that takes the Sum_{i,j} corresponding to each pixel point of the target inversion map as pixel values. Each pixel value in the result map is normalized to the range 0 to 255 to obtain a normalized result map, the color of each pixel point in the normalized result map is inverted to obtain an inversion result map, the weighted average coordinate of the pixel points in the inversion result map is calculated according to the pixel value and the coordinate of each pixel point, and the eyeball center coordinate is determined according to the weighted average coordinate. The purpose of accurately determining the eyeball center is thereby achieved.
It can be understood that the larger the area of the target image and the mask grayscale image, the slower the computation. To increase the computation speed, if the area a of the target image is greater than an area threshold A, the target image and the mask grayscale image are reduced to area A without changing their aspect ratio. Please refer to fig. 9, a schematic flowchart of one implementation of obtaining the target image containing the eye according to the coordinates of the outer contour feature points in the eyeball center positioning method provided in the embodiment of the present application, which includes:
step S901: and obtaining a circumscribed rectangle of the outsourcing outline characteristic points.
Step S902: and determining the area surrounded by the circumscribed rectangle as a quasi-target image.
In order to fully include the eye region, a circumscribed rectangle of the outline feature points is taken as a quasi-target image.
Step S903: judging whether the image area of the quasi-target image is greater than an area threshold.
Step S904: when the image area a of the quasi-target image is greater than the area threshold A, scaling the length and the width of the quasi-target image according to a scaling coefficient rate to obtain the scaled target image.
Step S905: when the image area a of the quasi-target image is less than or equal to the area threshold A, determining the quasi-target image as the target image.
Correspondingly, determining the eyeball center coordinate according to the weighted average coordinate includes: when the target image is the quasi-target image, determining the weighted average coordinate as the eyeball center coordinate; and when the target image is obtained by scaling the quasi-target image, taking the product of the weighted average coordinate and the scaling coefficient rate as the eyeball center coordinate.
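The patent gives the scaling coefficient as a formula image that is not reproduced in this text, so the sketch below assumes the natural choice rate = sqrt(a / A): the scaled image then has area close to A, and the weighted average coordinate is mapped back by multiplying by rate.

```python
import math
import cv2

def scale_quasi_target(quasi_target, area_threshold=1000.0):
    """Sketch of steps S903 to S905 under our assumption rate = sqrt(a / A).
    Returns the (possibly scaled) target image and the rate by which the
    weighted average coordinate must be multiplied afterwards."""
    h, w = quasi_target.shape[:2]
    a = float(h * w)
    if a <= area_threshold:
        return quasi_target, 1.0  # the quasi-target image is used directly
    rate = math.sqrt(a / area_threshold)
    scaled = cv2.resize(quasi_target, (int(round(w / rate)), int(round(h / rate))))
    return scaled, rate
```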
It can be understood that the eyeball center positioning method can be iterated. A larger number of iterations may increase the accuracy of the eyeball center positioning up to a point, but may also decrease it, so finding a suitable number of iterations is important.
Before step S104, the eyeball center positioning method further includes: setting the maximum number of iterations and setting the current number of iterations to 0. After step S111, the eyeball center positioning method further includes:
adding 1 to the current number of iterations; judging whether the current number of iterations is greater than or equal to the maximum number of iterations; when it is, executing step S112; when the current number of iterations is smaller than the maximum number of iterations, taking the inversion result map as the target image and returning to step S104.
The maximum number of iterations may be a positive integer greater than or equal to 1 and less than or equal to 4, or may be another positive integer, such as 5, 6, etc.
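Tying the steps together, a sketch of the iterative flow follows, chaining the helper functions sketched earlier (all names are ours, not the patent's). Each pass runs steps S104 to S111 and feeds the inversion result map back in as the next target image; after the last pass, steps S112/S113 are performed.

```python
import numpy as np
import cv2

def locate_eye_center(target_image, mask, max_iterations=3):
    """Sketch of the iterative flow, assuming the helpers defined in the
    earlier sketches (target_inversion_map, min_max_normalize,
    squared_dot_product_sums, inversion_result_map,
    weighted_average_coordinate) are in scope."""
    current = cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY).astype(np.float64)
    for _ in range(max_iterations):
        # Step S107: prior weight from the current target image and the mask.
        weight = target_inversion_map(np.clip(current, 0, 255).astype(np.uint8), mask)
        # Step S104: Sobel gradient maps of the current target image.
        gX = cv2.Sobel(current, cv2.CV_64F, 1, 0, ksize=3)
        gY = cv2.Sobel(current, cv2.CV_64F, 0, 1, ksize=3)
        # Steps S105/S106: normalized gradient maps.
        gX_n, gY_n = min_max_normalize(gX), min_max_normalize(gY)
        # Steps S108 to S111.
        result = squared_dot_product_sums(weight, gX_n, gY_n)
        current = inversion_result_map(result)  # next pass's target image
    # Steps S112/S113: weighted average coordinate as the eyeball center.
    return weighted_average_coordinate(current)
```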
To give those skilled in the art a better sense of the accuracy of the eyeball center positioning method provided in the embodiments of the present application, the applicant also carried out experiments.
36,000 face images were extracted and tested with the eyeball center positioning method. With the maximum number of iterations NMax set to 3 and the area threshold A set to 1000, the average error MeanError was 0.1217r and the standard deviation StdError was 0.1086r, where r is the radius of the eyeball.
The mean computation time per image was only 2.57 ms, measured on a MacBook Pro (Retina, 15-inch, Mid 2015) with OS X 10.11 and Xcode 7.3.
In practical applications, to meet the requirements of the beauty-pupil application, the average error MeanError of the eyeball center positioning is usually required to be less than 0.15r, and the standard deviation StdError is required to be less than 0.15r.
Please refer to fig. 10, a schematic structural diagram of an eyeball center positioning device according to an embodiment of the present application. The eyeball center positioning device includes: a first obtaining module 1001, a second obtaining module 1002, a third obtaining module 1003, a fourth obtaining module 1004, a fifth obtaining module 1005, a sixth obtaining module 1006, a seventh obtaining module 1007, a first calculating module 1008, an eighth obtaining module 1009, a ninth obtaining module 1010, a tenth obtaining module 1011, a second calculating module 1012, and a determining module 1013, wherein:
a first obtaining module 1001 configured to obtain coordinates of an outsourcing contour feature point of an eye in an image.
There are many ways to obtain the outsourced contour feature points of the eye, such as the asm (active Shape model) method or the neural network method.
Reference may be made to fig. 2, which is not described in detail here.
A second obtaining module 1002, configured to obtain a target image including an eye according to the coordinates of the outsourcing contour feature point.
A third obtaining module 1003, configured to set a pixel value of each pixel of the eye internal image surrounded by the outline-outsourcing feature point in the target image to 255, and set a pixel value of each pixel of the eye external image in the target image to 0, so as to obtain a mask grayscale image.
The image of the inside of the eye surrounded by the outer contour feature points can be obtained by utilizing Seal curve drawing.
A fourth obtaining module 1004, configured to calculate an x-direction gradient gX (i, j) of each pixel point (i, j) in the target image and a y-direction gradient gY (i, j) of each pixel point (i, j) in the target image, and obtain an x-direction gradient map composed of the x-direction gradients of each pixel point and a y-direction gradient map composed of the y-direction gradients of each pixel point.
The x-direction gradient map and the y-direction gradient map can be calculated with first-order differential operators such as Sobel or Prewitt. Taking the Sobel operator as an example, the fourth obtaining module 1004 may include a first obtaining unit for calculating the x-direction gradient map and the y-direction gradient map by the following formulas:
gX(i, j) = SobelX(Src(i, j)); gY(i, j) = SobelY(Src(i, j));
where Src(i, j) represents the pixel value of the pixel point (i, j) in the target image, and SobelX and SobelY denote the x-direction and y-direction Sobel operators.
a fifth obtaining module 1005, configured to normalize each x-direction gradient in the x-direction gradient map, to obtain a normalized x-direction gradient map.
A sixth obtaining module 1006, configured to normalize each y-direction gradient in the y-direction gradient map, to obtain a normalized y-direction gradient map.
A seventh obtaining module 1007, configured to invert the color of each pixel in the target image according to the target image and the mask grayscale image, so as to obtain a target inversion diagram.
The seventh obtaining module 1007 may include a second obtaining unit configured to obtain the target inversion map according to the following formula:
Weight_{i,j} = (255 - Src_{i,j}) × Mask_{i,j} / 255;
where Weight_{i,j} is the pixel value of the pixel point (i, j) in the target inversion map, Src_{i,j} is the pixel value of the pixel point (i, j) in the target image, and Mask_{i,j} is the pixel value of the pixel point (i, j) in the mask grayscale image.
A first calculating module 1008, configured to perform the following operation for each pixel point (i, j) with a non-zero pixel value in the target inversion map:
calculating the sum of squared dot products Sum_{i,j} = Σ_{(I,J)∈Ω} (P_{i,j}(I, J) · G_{I,J})^2, where G_{I,J} = (gX'(I, J), gY'(I, J)) is the unit gradient vector of the pixel point (I, J) in the gradient maps (the x-direction gradient map and the y-direction gradient map), P_{i,j}(I, J) is the unit position vector of the pixel point (I, J) relative to the pixel point (i, j), I is greater than or equal to 0 and less than the total number of rows M of pixel points in the gradient maps, J is greater than or equal to 0 and less than the total number of columns N, gX'(I, J) is the pixel value of the pixel point (I, J) of the normalized x-direction gradient map, and gY'(I, J) is the pixel value of the pixel point (I, J) of the normalized y-direction gradient map.
An eighth obtaining module 1009, configured to obtain a result map that takes the Sum_{i,j} corresponding to each pixel point of the target inversion map as pixel values.
A ninth obtaining module 1010, configured to normalize each pixel value in the result map to a range from 0 to 255, and obtain a normalized result map.
A tenth obtaining module 1011, configured to invert the color of each pixel in the normalized result graph, so as to obtain an inverse result graph.
The tenth obtaining module 1011 may include a third obtaining unit for obtaining the inversion result map by the following formula:
Sum'_{i,j} = 255 - 255 × (Sum_{i,j} - minSum) / (maxSum - minSum);
where Sum'_{i,j} is the pixel value of the pixel point (i, j) in the inversion result map, Sum_{i,j} is the pixel value of the pixel point (i, j) in the result map, minSum is the minimum pixel value in the result map, and maxSum is the maximum pixel value in the result map.
A second calculating module 1012, configured to calculate a weighted average coordinate of each pixel point in the inverse result graph according to the pixel value of each pixel point in the inverse result graph and the coordinate of each pixel point.
The second calculating module 1012 may include a fourth obtaining unit for obtaining the weighted average coordinate by the following formulas:
Vi = Σ_{(i,j)∈Ω} (f(Sum'_{i,j}) × j) / Σ_{(i,j)∈Ω} f(Sum'_{i,j});
Vj = Σ_{(i,j)∈Ω} (f(Sum'_{i,j}) × i) / Σ_{(i,j)∈Ω} f(Sum'_{i,j});
where Sum'_{i,j} is the pixel value of the pixel point (i, j) in the inversion result map, (Vi, Vj) is the weighted average coordinate, and the function f(Sum'_{i,j}) is a mapping function that maps the pixel values 0 to 255 of the pixel points of the inversion result map into a preset range, such that the smaller Sum'_{i,j} is, the larger f(Sum'_{i,j}) is.
A determining module 1013, configured to determine the eyeball center coordinate according to the weighted average coordinate.
In the eyeball center positioning device provided in the embodiment of the present application, the second obtaining module 1002 obtains a target image containing the eye through the coordinates of the outer contour feature points of the eye, the third obtaining module 1003 then obtains a mask grayscale image of the target image, and, exploiting the characteristic that the eyeball is dark, the seventh obtaining module 1007 inverts the color of each pixel point in the target image according to the target image and the mask grayscale image to obtain a target inversion map. The fourth obtaining module 1004 calculates the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image to obtain an x-direction gradient map and a y-direction gradient map; the fifth obtaining module 1005 normalizes each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map, and the sixth obtaining module 1006 normalizes each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map. The first calculating module 1008 calculates, for each pixel point (i, j), the sum Sum_{i,j} of the squared dot products between the unit gradient vectors G_{I,J} = (gX'(I, J), gY'(I, J)) of the pixel points (I, J) in the gradient maps and the corresponding unit position vectors; the eighth obtaining module 1009 obtains a result map that takes the Sum_{i,j} corresponding to each pixel point of the target inversion map as pixel values; the ninth obtaining module 1010 normalizes each pixel value in the result map to the range 0 to 255 to obtain a normalized result map; the tenth obtaining module 1011 inverts the color of each pixel point in the normalized result map to obtain an inversion result map; the second calculating module 1012 calculates the weighted average coordinate of the pixel points in the inversion result map according to the pixel value and the coordinate of each pixel point; and the determining module 1013 determines the eyeball center coordinate according to the weighted average coordinate. The purpose of accurately determining the eyeball center is thereby achieved.
Please refer to fig. 11, a schematic structural diagram of one implementation of the second obtaining module in the eyeball center positioning device according to an embodiment of the present application. The second obtaining module includes: a fifth obtaining unit 1101, a first determining unit 1102, a judging unit 1103, a scaling unit 1104, and a second determining unit 1105, wherein:
a fifth obtaining unit 1101, configured to obtain a circumscribed rectangle of the outsourcing outline feature point.
A first determining unit 1102, configured to determine an area surrounded by the circumscribed rectangle as a quasi-target image.
A first determining unit 1103, configured to determine whether an image area of the quasi-target image is larger than an area threshold.
A scaling unit 1104 for, when the image area a of the quasi-target image is larger than the area threshold A, scaling the length and width of the quasi-target image by a scaling factorAnd zooming to obtain the zoomed target image.
A second determining unit 1105, configured to determine the quasi-target image as the target image when the image area a of the quasi-target image is smaller than or equal to the area threshold a.
Accordingly, the determining module 1013 includes: a third determining unit, configured to determine the weighted average coordinate as the eyeball center coordinate when the target image is the quasi-target image; a fourth determining unit, configured to, when the target image is an image obtained by scaling the quasi-target image, use a product of the weighted average coordinate and a scaling coefficient rate as the eyeball center coordinate.
It can be understood that the eyeball center positioning method can be iterated; a larger number of iterations may increase the accuracy of the eyeball center positioning up to a point, but may also decrease it, so finding a suitable number of iterations is important.
The eyeball center positioning device may further include: a setting module, configured to set the maximum number of iterations and set the current number of iterations to 0; an adding module, configured to add 1 to the current number of iterations; a judging module, configured to judge whether the current number of iterations is greater than or equal to the maximum number of iterations; a first triggering module, configured to trigger the second calculating module 1012 when the current number of iterations is greater than or equal to the maximum number of iterations; and a second triggering module, configured to take the inversion result map as the target image and trigger the fourth obtaining module 1004 when the current number of iterations is smaller than the maximum number of iterations.
The embodiment of the present application further provides an eyeball center positioning system, and the eyeball center positioning system includes: a processor and a memory, wherein:
a memory to store the processor-executable instructions.
The processor is configured to:
The coordinates of the outer contour feature points of an eye in an image are acquired.
A target image containing the eye is obtained according to the coordinates of the outer contour feature points.
The pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image is set to 255, and the pixel value of each pixel of the eye-exterior image in the target image is set to 0, to obtain a mask grayscale image.
The x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image are calculated, to obtain an x-direction gradient map composed of the x-direction gradients of the pixel points and a y-direction gradient map composed of the y-direction gradients of the pixel points.
Each x-direction gradient in the x-direction gradient map is normalized to obtain a normalized x-direction gradient map.
Each y-direction gradient in the y-direction gradient map is normalized to obtain a normalized y-direction gradient map.
The color of each pixel point in the target image is inverted according to the target image and the mask grayscale image, to obtain a target inversion map.
The following operation is performed for each pixel point (i, j) with a non-zero pixel value in the target inversion map:
the sum of squared dot products Sum_{i,j} = Σ_{(I,J)∈Ω} (P_{i,j}(I, J) · G_{I,J})^2 is calculated, where G_{I,J} = (gX'(I, J), gY'(I, J)) is the unit gradient vector of the pixel point (I, J) in the gradient maps (the x-direction gradient map and the y-direction gradient map), P_{i,j}(I, J) is the unit position vector of the pixel point (I, J) relative to the pixel point (i, j), I is greater than or equal to 0 and less than the total number of rows M of pixel points in the gradient maps, J is greater than or equal to 0 and less than the total number of columns N, gX'(I, J) is the pixel value of the pixel point (I, J) of the normalized x-direction gradient map, and gY'(I, J) is the pixel value of the pixel point (I, J) of the normalized y-direction gradient map.
A result map that takes the Sum_{i,j} corresponding to each pixel point of the target inversion map as pixel values is obtained.
Each pixel value in the result map is normalized to the range 0 to 255 to obtain a normalized result map.
The color of each pixel point in the normalized result map is inverted to obtain an inversion result map.
The weighted average coordinate of the pixel points in the inversion result map is calculated according to the pixel value and the coordinate of each pixel point in the inversion result map.
The eyeball center coordinate is determined according to the weighted average coordinate.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An eyeball center positioning method, comprising:
acquiring coordinates of the outer contour feature points of an eye in an image;
obtaining a target image containing the eye according to the coordinates of the outer contour feature points;
setting the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and setting the pixel value of each pixel of the eye-exterior image in the target image to 0, to obtain a mask grayscale image;
calculating the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image, to obtain an x-direction gradient map composed of the x-direction gradients of the pixel points and a y-direction gradient map composed of the y-direction gradients of the pixel points;
normalizing each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map;
normalizing each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map;
inverting the color of each pixel point in the target image according to the target image and the mask grayscale image, to obtain a target inversion map;
performing the following operation for each pixel point (i, j) with a non-zero pixel value in the target inversion map:
calculating the sum of squared dot products Sum_{i,j} = Σ_{(I,J)∈Ω} (P_{i,j}(I, J) · G_{I,J})^2, where G_{I,J} = (gX'(I, J), gY'(I, J)) is the unit gradient vector of the pixel point (I, J) in the gradient maps (the x-direction gradient map and the y-direction gradient map), P_{i,j}(I, J) is the unit position vector of the pixel point (I, J) relative to the pixel point (i, j), I is greater than or equal to 0 and less than the total number of rows M of pixel points in the gradient maps, J is greater than or equal to 0 and less than the total number of columns N of pixel points in the gradient maps, gX'(I, J) is the pixel value of the pixel point (I, J) of the normalized x-direction gradient map, and gY'(I, J) is the pixel value of the pixel point (I, J) of the normalized y-direction gradient map;
obtaining a result map that takes the Sum_{i,j} corresponding to each pixel point of the target inversion map as pixel values;
normalizing each pixel value in the result map to the range 0 to 255 to obtain a normalized result map;
inverting the color of each pixel point in the normalized result map to obtain an inversion result map;
calculating the weighted average coordinate of the pixel points in the inversion result map according to the pixel value and the coordinate of each pixel point in the inversion result map;
and determining the eyeball center coordinate according to the weighted average coordinate.
2. The eyeball center positioning method according to claim 1, wherein the obtaining a target image containing the eye according to the coordinates of the outer contour feature points comprises:
obtaining the circumscribed rectangle of the outer contour feature points;
determining the area enclosed by the circumscribed rectangle as a quasi-target image;
judging whether the image area of the quasi-target image is greater than an area threshold;
when the image area a of the quasi-target image is greater than the area threshold A, scaling the length and the width of the quasi-target image according to a scaling coefficient rate to obtain the scaled target image;
when the image area a of the quasi-target image is less than or equal to the area threshold A, determining the quasi-target image as the target image.
3. The method according to claim 2, wherein the determining the eyeball center coordinates according to the weighted average coordinates comprises:
when the target image is the quasi-target image, determining the weighted average coordinate as the eyeball center coordinate;
and when the target image is the image obtained by scaling the quasi-target image, taking the product of the weighted average coordinate and the scaling coefficient rate as the eyeball center coordinate.
4. The eyeball center positioning method according to claim 1, further comprising, before the calculating the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image:
setting the maximum number of iterations, and setting the current number of iterations to 0;
and, after the inverting the color of each pixel point in the normalized result map to obtain an inversion result map:
adding 1 to the current number of iterations;
judging whether the current number of iterations is greater than or equal to the maximum number of iterations;
when the current number of iterations is greater than or equal to the maximum number of iterations, executing the step of calculating the weighted average coordinate of the pixel points in the inversion result map according to the pixel value and the coordinate of each pixel point in the inversion result map;
and when the current number of iterations is smaller than the maximum number of iterations, taking the inversion result map as the target image and returning to the step of calculating the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image.
5. The eyeball center positioning method according to any one of claims 1 to 4, wherein the calculating of the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image, to obtain an x-direction gradient map composed of the x-direction gradients of the pixel points and a y-direction gradient map composed of the y-direction gradients of the pixel points, comprises:
gX(i, j) = SobelX(Src(i, j)); gY(i, j) = SobelY(Src(i, j));
where Src(i, j) represents the pixel value of the pixel point (i, j) in the target image, and SobelX and SobelY denote the x-direction and y-direction Sobel operators.
6. The eyeball center positioning method according to any one of claims 1 to 4, wherein the inverting the color of each pixel point in the target image according to the target image and the mask grayscale image to obtain a target inversion map comprises:
Weight_{i,j} = (255 - Src_{i,j}) × Mask_{i,j} / 255;
where Weight_{i,j} is the pixel value of the pixel point (i, j) in the target inversion map, Src_{i,j} is the pixel value of the pixel point (i, j) in the target image, and Mask_{i,j} is the pixel value of the pixel point (i, j) in the mask grayscale image.
7. The eyeball center positioning method according to any one of claims 1 to 4, wherein the inverting the color of each pixel point in the normalized result map to obtain an inversion result map comprises:
Sum'_{i,j} = 255 - 255 × (Sum_{i,j} - minSum) / (maxSum - minSum);
where Sum'_{i,j} is the pixel value of the pixel point (i, j) in the inversion result map, Sum_{i,j} is the pixel value of the pixel point (i, j) in the result map, minSum is the minimum pixel value in the result map, and maxSum is the maximum pixel value in the result map.
8. The eyeball center positioning method according to any one of claims 1 to 4, wherein the calculating the weighted average coordinate of the pixel points in the inversion result map according to the pixel value and the coordinate of each pixel point in the inversion result map comprises:
Vi = Σ_{(i,j)∈Ω} (f(Sum'_{i,j}) × j) / Σ_{(i,j)∈Ω} f(Sum'_{i,j});
Vj = Σ_{(i,j)∈Ω} (f(Sum'_{i,j}) × i) / Σ_{(i,j)∈Ω} f(Sum'_{i,j});
where Sum'_{i,j} is the pixel value of the pixel point (i, j) in the inversion result map, (Vi, Vj) is the weighted average coordinate, and the function f(Sum'_{i,j}) is a mapping function that maps the pixel values 0 to 255 of the pixel points of the inversion result map into a preset range, such that the smaller Sum'_{i,j} is, the larger f(Sum'_{i,j}) is.
9. An eyeball center positioning device, comprising:
the first acquisition module is used for acquiring coordinates of outsourcing outline characteristic points of the eyes in the image;
the second acquisition module is used for acquiring a target image containing eyes according to the coordinates of the outsourcing outline characteristic points;
a third obtaining module, configured to set a pixel value of each pixel of an eye internal image surrounded by the outsourcing contour feature points in the target image to 255, and set a pixel value of each pixel of an eye external image in the target image to 0, so as to obtain a mask grayscale image;
a fourth obtaining module, configured to calculate an x-direction gradient gX (i, j) of each pixel point (i, j) in the target image and a y-direction gradient gY (i, j) of each pixel point (i, j) in the target image, and obtain an x-direction gradient map composed of the x-direction gradients of each pixel point and a y-direction gradient map composed of the y-direction gradients of each pixel point;
a fifth obtaining module, configured to normalize each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map;
a sixth obtaining module, configured to normalize each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map;
a seventh obtaining module, configured to invert the colors of all pixel points in the target image according to the target image and the mask grayscale image, so as to obtain a target inversion map;
a first calculating module, configured to perform the following operation for each pixel point (i, j) with a nonzero pixel value in the target inversion map (see the sketch following this claim):
calculating the sum Sum_{i,j} of the squares of the dot products between the unit position vector from the pixel point (i, j) to each pixel point (I, J) in the gradient map and the unit gradient vector G_{I,J} = (gX'(I, J), gY'(I, J)) of that pixel point (I, J), wherein the gradient map is the x-direction gradient map or the y-direction gradient map, I is greater than or equal to 0 and smaller than the total row number M of pixel points in the gradient map, J is greater than or equal to 0 and smaller than the total column number N of pixel points in the gradient map, gX'(I, J) is the pixel value of pixel point (I, J) in the normalized x-direction gradient map, and gY'(I, J) is the pixel value of pixel point (I, J) in the normalized y-direction gradient map;
an eighth obtaining module, configured to obtain a result map in which the Sum_{i,j} corresponding to each pixel point of the target inversion map serves as the pixel value;
a ninth obtaining module, configured to normalize each pixel value in the result map to a range from 0 to 255, to obtain a normalized result map;
a tenth obtaining module, configured to invert the color of each pixel point in the normalized result map to obtain an inversion result map;
the second calculation module is used for calculating the weighted average coordinate over the pixel points of the inversion result map according to the pixel value of each pixel point in the inversion result map and the coordinates of each pixel point;
and the determining module is used for determining the eyeball center coordinate according to the weighted average coordinate.
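By way of illustration only, a NumPy sketch of one reading of the first calculating module, in the spirit of the gradient-based eye-centre objective of FABIAN TIMM et al. cited below: every nonzero pixel of the target inversion map is treated as a candidate centre, and the squared dot products between unit displacement vectors and the unit gradients are accumulated. The function name is illustrative, the inversion map is used here only to select candidates, and any per-candidate weighting the patent may additionally intend is omitted.

```python
import numpy as np

def squared_dot_sum(weight: np.ndarray, gxn: np.ndarray, gyn: np.ndarray) -> np.ndarray:
    """Sum_{i,j} over all gradient pixels (I, J) of the squared dot product between
    the unit vector from (i, j) to (I, J) and the unit gradient (gX'(I,J), gY'(I,J))."""
    m, n = weight.shape
    out = np.zeros((m, n), dtype=np.float64)
    gi, gj = np.indices((m, n))            # (I, J) grid over the gradient maps
    for i, j in zip(*np.nonzero(weight)):  # candidate centres: nonzero Weight pixels
        dy = (gi - i).astype(np.float64)   # displacement from (i, j) to (I, J)
        dx = (gj - j).astype(np.float64)
        norm = np.hypot(dx, dy)
        norm[i, j] = 1.0                   # the zero displacement at (i, j) contributes nothing
        dot = (dx * gxn + dy * gyn) / norm # dot of unit displacement and unit gradient
        out[i, j] = np.sum(dot * dot)
    return out
```

This brute-force form costs O(M·N) work per candidate pixel, so restricting the computation to nonzero pixels of the inversion map, as the module does, matters in practice.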
10. An eye center positioning system, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to:
acquiring the coordinates of the outer contour feature points of the eye in the image;
obtaining a target image containing the eye according to the coordinates of the outer contour feature points;
setting the pixel value of each pixel of the eye interior image enclosed by the outer contour feature points in the target image to 255, and setting the pixel value of each pixel of the eye exterior image in the target image to 0, to obtain a mask grayscale image;
calculating the x-direction gradient gX(i, j) of each pixel point (i, j) in the target image and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image, to obtain an x-direction gradient map composed of the x-direction gradients of the pixel points and a y-direction gradient map composed of the y-direction gradients of the pixel points;
normalizing each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map;
normalizing each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map;
inverting the color of each pixel point in the target image according to the target image and the mask grayscale image to obtain a target inversion map;
performing the following operation for each pixel point (i, j) with a nonzero pixel value in the target inversion map (a pipeline sketch combining these steps follows this claim):
calculating the sum Sum_{i,j} of the squares of the dot products between the unit position vector from the pixel point (i, j) to each pixel point (I, J) in the gradient map and the unit gradient vector G_{I,J} = (gX'(I, J), gY'(I, J)) of that pixel point (I, J), wherein the gradient map is the x-direction gradient map or the y-direction gradient map, I is greater than or equal to 0 and smaller than the total row number M of pixel points in the gradient map, J is greater than or equal to 0 and smaller than the total column number N of pixel points in the gradient map, gX'(I, J) is the pixel value of pixel point (I, J) in the normalized x-direction gradient map, and gY'(I, J) is the pixel value of pixel point (I, J) in the normalized y-direction gradient map;
obtaining a result map in which the Sum_{i,j} corresponding to each pixel point of the target inversion map serves as the pixel value;
normalizing each pixel value in the result map to the range 0 to 255 to obtain a normalized result map;
inverting the color of each pixel point in the normalized result map to obtain an inversion result map;
calculating the weighted average coordinate over the pixel points of the inversion result map according to the pixel value of each pixel point in the inversion result map and the coordinates of each pixel point;
and determining the eyeball center coordinate according to the weighted average coordinate.
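Finally, purely as an illustrative sketch that reuses the helper functions from the earlier sketches, the claimed processor steps chain together as follows for a single pass; the iterative variant would feed the inversion result map back in as the target image while the iteration count stays below its maximum. The mask construction with cv2.fillPoly is an assumed implementation of the mask step, not something the patent specifies.

```python
import cv2
import numpy as np

def eye_mask(shape, contour_points) -> np.ndarray:
    """Mask grayscale image: 255 inside the polygon of outer contour feature
    points, 0 outside (assumed implementation of the mask step)."""
    mask = np.zeros(shape, dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(contour_points, dtype=np.int32)], 255)
    return mask

def locate_eye_center(src: np.ndarray, contour_points):
    """One pass of the claimed pipeline; returns the weighted average (Vi, Vj)."""
    mask = eye_mask(src.shape, contour_points)
    gxn, gyn = gradient_maps(src)             # Sobel gradients, unit-normalized
    weight = target_inversion_map(src, mask)  # masked color inversion
    sums = squared_dot_sum(weight, gxn, gyn)  # per-candidate squared dot-product sum
    sum_inv = normalize_and_invert(sums)      # normalize to 0..255, then invert
    return weighted_center(sum_inv)           # eyeball center estimate
```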
CN201610799050.XA 2016-08-31 2016-08-31 Eyeball center positioning method, device and system Active CN106373155B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610799050.XA CN106373155B (en) 2016-08-31 2016-08-31 Eyeball center positioning method, device and system

Publications (2)

Publication Number Publication Date
CN106373155A true CN106373155A (en) 2017-02-01
CN106373155B CN106373155B (en) 2019-10-22

Family

ID=57899267

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610799050.XA Active CN106373155B (en) 2016-08-31 2016-08-31 Eyeball center positioning method, device and system

Country Status (1)

Country Link
CN (1) CN106373155B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1422596A (en) * 2000-08-09 2003-06-11 松下电器产业株式会社 Eye position detection method and apparatus thereof
US9311527B1 (en) * 2011-07-14 2016-04-12 The Research Foundation For The State University Of New York Real time eye tracking for human computer interaction
CN105512603A (en) * 2015-01-20 2016-04-20 上海伊霍珀信息科技股份有限公司 Dangerous driving detection method based on principle of vector dot product

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FABIAN TIMM et al.: "ACCURATE EYE CENTRE LOCALISATION BY MEANS OF GRADIENTS", VISAPP 2011 - Proceedings of the Sixth International Conference on Computer Vision Theory and Applications, Vilamoura, Algarve, Portugal *
ZHOU Lifang et al.: "Principles and Engineering Applications of Pattern Recognition", 30 June 2013, Beijing: China Machine Press *
ZHANG Min et al.: "Detection and Location of Human Eyes in Face Images", Opto-Electronic Engineering *
LU Ling et al.: "Digital Image Processing", 31 July 2007, Beijing: China Electric Power Press *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107249126A (en) * 2017-07-28 2017-10-13 华中科技大学 A kind of gazing direction of human eyes tracking suitable for free view-point 3 D video
CN108564531A (en) * 2018-05-08 2018-09-21 麒麟合盛网络技术股份有限公司 A kind of image processing method and device

Also Published As

Publication number Publication date
CN106373155B (en) 2019-10-22

Similar Documents

Publication Publication Date Title
Zea et al. Level-set random hypersurface models for tracking nonconvex extended objects
CN104008538B (en) Based on single image super-resolution method
CN101465002B (en) Method for orientating secondary pixel edge of oval-shaped target
CN104268591A (en) Face key point detecting method and device
CN104881866A (en) Fisheye camera rectification and calibration method for expanding pin-hole imaging model
CN105894521A (en) Sub-pixel edge detection method based on Gaussian fitting
CN106778660B (en) A kind of human face posture bearing calibration and device
Biolè et al. A goniometric mask to measure contact angles from digital images of liquid drops
US20160371817A1 (en) Image processing apparatus for image processing based on accurate fundamental matrix
CN101789119A (en) Method and device for determining filter coefficients in process of image interpolation
CN106373155A (en) Eyeball center positioning method, device and system
CN114445482B (en) Method and system for detecting target in image based on Libra-RCNN and elliptical shape characteristics
CN104657994A (en) Image consistency judging method and system based on optical flow method
CN107704847A (en) A kind of detection method of face key point
CN104915973A (en) Method for solving center of regular circle in image
Carr et al. Semi-closed form solutions for barrier and American options written on a time-dependent Ornstein Uhlenbeck process
US20190378251A1 (en) Image processing method
Xu et al. Retinal vessel width measurements based on a graph-theoretic method
Jumaat et al. Performance comparison of Canny and Sobel edge detectors on Balloon Snake in segmenting masses
US10885635B2 (en) Curvilinear object segmentation with noise priors
CN106293270A (en) A kind of scaling method of giant-screen touch-control system
US9959635B2 (en) State determination device, eye closure determination device, state determination method, and storage medium
CN109901716B (en) Sight point prediction model establishing method and device and sight point prediction method
US20200193605A1 (en) Curvilinear object segmentation with geometric priors
CN105260717A (en) Eyeball tracking method utilizing iris center positioning based on convolution kernel and circle boundary calculus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Rooms C402 and 403, 4/F, Building C, Building B-6, Zhongguancun Dongsheng Science and Technology Park, No. 66 Xixiaokou Road, Haidian District, Beijing 100192

Applicant after: Beijing Beta Technology Co., Ltd.

Address before: Building C, 4/F, Building B-6, Dongsheng Science Park, No. 66 Xixiaokou Road, Haidian District, Beijing 100000

Applicant before: Beijing Yuntu Weidong Technology Co.,Ltd.

GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100192 rooms c402 and 403, 4 / F, building C, building B-6, Dongsheng Science Park, Zhongguancun, No. 66, xixiaokou Road, Haidian District, Beijing

Patentee after: Beijing beta Technology Co.,Ltd.

Address before: 100192 rooms c402 and 403, 4 / F, building C, building B-6, Dongsheng Science Park, Zhongguancun, No. 66, xixiaokou Road, Haidian District, Beijing

Patentee before: BEIJING FOTOABLE TECHNOLOGY LTD.
