Disclosure of Invention
In view of this, the present invention provides an eyeball center positioning method, device, and system, so as to overcome the problem in the prior art that the position of the eyeball center cannot be accurately determined during portrait beautification, which causes the rendered cosmetic pupil to be misaligned with the eyeball in the image.
In order to achieve the above purpose, the present invention provides the following technical solutions:
An eyeball center positioning method, comprising:
acquiring coordinates of outer contour feature points of an eye in an image;
obtaining a target image containing the eye according to the coordinates of the outer contour feature points;
setting the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and setting the pixel value of each pixel of the eye-exterior image in the target image to 0, so as to obtain a mask grayscale image;
calculating the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image, so as to obtain an x-direction gradient map composed of the x-direction gradients of the pixel points and a y-direction gradient map composed of the y-direction gradients of the pixel points;
normalizing each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map;
normalizing each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map;
inverting the color of each pixel point in the target image according to the target image and the mask grayscale image to obtain a target inversion map;
performing the following operation for each pixel point (i, j) with a non-zero pixel value in the target inversion map:
calculating the sum of squared dot products Sum_{i,j} between the unit gradient vector G_{I,J} = (gX'(I, J), gY'(I, J)) of each pixel point (I, J) in the gradient maps and the unit position vector d_{I,J} pointing from (i, j) to (I, J):
Sum_{i,j} = Σ_{(I,J)∈Ω} (d_{I,J} · G_{I,J})², with d_{I,J} = (I − i, J − j)/||(I − i, J − j)||;
where the gradient map is the x-direction gradient map or the y-direction gradient map, (I, J) ∈ Ω represents traversing each pixel point of the gradient map, gX'(I, J) is the pixel value of pixel point (I, J) in the normalized x-direction gradient map, and gY'(I, J) is the pixel value of pixel point (I, J) in the normalized y-direction gradient map;
obtaining a result map composed of the values Sum_{i,j} corresponding to the pixel points of the target inversion map as pixel values;
normalizing each pixel value in the result map to the range 0 to 255 to obtain a normalized result map;
inverting the color of each pixel point in the normalized result map to obtain an inverted result map;
calculating the weighted average coordinate of the pixel points in the inverted result map according to the pixel value and the coordinate of each pixel point in the inverted result map;
and determining the eyeball center coordinate according to the weighted average coordinate.
An eyeball center positioning device, comprising:
a first obtaining module, configured to obtain coordinates of outer contour feature points of an eye in an image;
a second obtaining module, configured to obtain a target image containing the eye according to the coordinates of the outer contour feature points;
a third obtaining module, configured to set the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and set the pixel value of each pixel of the eye-exterior image in the target image to 0, so as to obtain a mask grayscale image;
a fourth obtaining module, configured to calculate the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image, so as to obtain an x-direction gradient map composed of the x-direction gradients of the pixel points and a y-direction gradient map composed of the y-direction gradients of the pixel points;
a fifth obtaining module, configured to normalize each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map;
a sixth obtaining module, configured to normalize each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map;
a seventh obtaining module, configured to invert the color of each pixel point in the target image according to the target image and the mask grayscale image, so as to obtain a target inversion map;
a first calculating module, configured to perform the following operation for each pixel point (i, j) with a non-zero pixel value in the target inversion map:
calculating the sum of squared dot products Sum_{i,j} between the unit gradient vector G_{I,J} = (gX'(I, J), gY'(I, J)) of each pixel point (I, J) in the gradient maps and the unit position vector d_{I,J} pointing from (i, j) to (I, J):
Sum_{i,j} = Σ_{(I,J)∈Ω} (d_{I,J} · G_{I,J})², with d_{I,J} = (I − i, J − j)/||(I − i, J − j)||;
where the gradient map is the x-direction gradient map or the y-direction gradient map, (I, J) ∈ Ω represents traversing each pixel point of the gradient map, gX'(I, J) is the pixel value of pixel point (I, J) in the normalized x-direction gradient map, and gY'(I, J) is the pixel value of pixel point (I, J) in the normalized y-direction gradient map;
an eighth obtaining module, configured to obtain a result map composed of the values Sum_{i,j} corresponding to the pixel points of the target inversion map as pixel values;
a ninth obtaining module, configured to normalize each pixel value in the result map to the range 0 to 255, to obtain a normalized result map;
a tenth obtaining module, configured to invert the color of each pixel point in the normalized result map to obtain an inverted result map;
a second calculating module, configured to calculate the weighted average coordinate of the pixel points in the inverted result map according to the pixel value and the coordinate of each pixel point in the inverted result map;
and a determining module, configured to determine the eyeball center coordinate according to the weighted average coordinate.
An eyeball center positioning system, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
acquire coordinates of outer contour feature points of an eye in an image;
obtain a target image containing the eye according to the coordinates of the outer contour feature points;
set the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and set the pixel value of each pixel of the eye-exterior image in the target image to 0, so as to obtain a mask grayscale image;
calculate the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image, so as to obtain an x-direction gradient map composed of the x-direction gradients of the pixel points and a y-direction gradient map composed of the y-direction gradients of the pixel points;
normalize each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map;
normalize each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map;
invert the color of each pixel point in the target image according to the target image and the mask grayscale image to obtain a target inversion map;
perform the following operation for each pixel point (i, j) with a non-zero pixel value in the target inversion map:
calculating the sum of squared dot products Sum_{i,j} between the unit gradient vector G_{I,J} = (gX'(I, J), gY'(I, J)) of each pixel point (I, J) in the gradient maps and the unit position vector d_{I,J} pointing from (i, j) to (I, J):
Sum_{i,j} = Σ_{(I,J)∈Ω} (d_{I,J} · G_{I,J})², with d_{I,J} = (I − i, J − j)/||(I − i, J − j)||;
where the gradient map is the x-direction gradient map or the y-direction gradient map, (I, J) ∈ Ω represents traversing each pixel point of the gradient map, gX'(I, J) is the pixel value of pixel point (I, J) in the normalized x-direction gradient map, and gY'(I, J) is the pixel value of pixel point (I, J) in the normalized y-direction gradient map;
obtain a result map composed of the values Sum_{i,j} corresponding to the pixel points of the target inversion map as pixel values;
normalize each pixel value in the result map to the range 0 to 255 to obtain a normalized result map;
invert the color of each pixel point in the normalized result map to obtain an inverted result map;
calculate the weighted average coordinate of the pixel points in the inverted result map according to the pixel value and the coordinate of each pixel point in the inverted result map;
and determine the eyeball center coordinate according to the weighted average coordinate.
Through the above technical solutions, compared with the prior art, in the eyeball center positioning method of the present invention, a target image containing the eye is obtained from the coordinates of the outer contour feature points of the eye, and a mask grayscale image of the target image is then obtained; exploiting the characteristic that the eyeball is dark, the color of each pixel point in the target image is inverted according to the target image and the mask grayscale image to obtain a target inversion map. The x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image are calculated to obtain an x-direction gradient map and a y-direction gradient map; each x-direction gradient in the x-direction gradient map is then normalized to obtain a normalized x-direction gradient map, and each y-direction gradient in the y-direction gradient map is normalized to obtain a normalized y-direction gradient map. Then, for each pixel point (i, j), the sum of squared dot products Sum_{i,j} between the unit gradient vector G_{I,J} = (gX'(I, J), gY'(I, J)) of each pixel point (I, J) in the gradient maps and the unit position vector pointing from (i, j) to (I, J) is calculated, and a result map composed of the values Sum_{i,j} corresponding to the pixel points of the target inversion map is obtained. Each pixel value in the result map is normalized to the range 0 to 255 to obtain a normalized result map, the color of each pixel point in the normalized result map is inverted to obtain an inverted result map, the weighted average coordinate of the pixel points in the inverted result map is calculated according to the pixel value and the coordinate of each pixel point, and the eyeball center coordinate is determined according to the weighted average coordinate. The center of the eyeball is thereby determined accurately.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1, which is a schematic flow chart of an eyeball center positioning method according to an embodiment of the present application, the method includes:
Step S101: acquiring coordinates of the outer contour feature points of the eye in the image.
There are many ways to obtain the outer contour feature points of the eye, for example the ASM (Active Shape Model) method or a neural network method.
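As an illustration only (the embodiments do not prescribe a specific detector), the following sketch obtains eye contour points with dlib's pre-trained 68-point face landmark model, whose indices 36 to 41 trace one eye; the model file path is an assumption and the model must be downloaded separately:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Hypothetical path: the pre-trained model is not part of the patent.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_contour_points(image_bgr):
    """Return the six outer contour feature points of one eye, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    # Landmarks 36-41 trace the outer contour of one eye.
    return np.array([(shape.part(k).x, shape.part(k).y) for k in range(36, 42)])
```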
Fig. 2 is a schematic diagram of the eye outer contour feature points provided in an embodiment of the present application.
As shown in fig. 2, the points 21 are the outer contour feature points of the eye.
Step S102: obtaining a target image containing the eye according to the coordinates of the outer contour feature points.
The target image is an image region that approximately contains the eye; the image shown in fig. 2 is a target image in one implementation of the embodiment of the present application.
Step S103: setting the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and setting the pixel value of each pixel of the eye-exterior image in the target image to 0, so as to obtain a mask grayscale image.
The eye-interior image enclosed by the outer contour feature points can be obtained by drawing and filling the closed curve through the feature points.
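As a minimal sketch of step S103, assuming OpenCV is available, a polygon fill can stand in for the closed-curve drawing:

```python
import cv2
import numpy as np

def make_eye_mask(target_shape, contour_points):
    """Mask grayscale image: 255 inside the eye contour, 0 outside."""
    mask = np.zeros(target_shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(contour_points, dtype=np.int32)], 255)
    return mask
```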
Fig. 3 shows a mask grayscale image provided in an embodiment of the present application.
As can be seen in fig. 3, the mask grayscale image includes an eye-interior image 31 and an eye-exterior image 32.
Step S104: calculating the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image, so as to obtain an x-direction gradient map composed of the x-direction gradients of the pixel points and a y-direction gradient map composed of the y-direction gradients of the pixel points.
The x-direction gradient map and the y-direction gradient map can be calculated with first-order differential operators such as Sobel or Prewitt. Taking the Sobel operator as an example (the standard 3×3 Sobel kernels, with i indexing rows and j indexing columns), the x-direction gradient map and the y-direction gradient map can be calculated by the following formulas:
gX(i, j) = [Src(i−1, j+1) + 2·Src(i, j+1) + Src(i+1, j+1)] − [Src(i−1, j−1) + 2·Src(i, j−1) + Src(i+1, j−1)];
gY(i, j) = [Src(i+1, j−1) + 2·Src(i+1, j) + Src(i+1, j+1)] − [Src(i−1, j−1) + 2·Src(i−1, j) + Src(i−1, j+1)];
where Src(i, j) represents the pixel value of pixel point (i, j) in the target image.
Fig. 4 shows an x-direction gradient map provided in the embodiment of the present application, and fig. 5 shows a y-direction gradient map provided in the embodiment of the present application. Fig. 6 shows a magnitude map provided in the embodiment of the present application.
The pixel value of each pixel point in the magnitude map is calculated according to the following formula:
mag_{i,j} = sqrt(gX(i, j)² + gY(i, j)²);
where mag_{i,j} is the pixel value of pixel point (i, j) in the magnitude map.
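A minimal sketch of step S104 using OpenCV's built-in Sobel operator (an assumption; any first-order differential operator works), together with the magnitude map of fig. 6:

```python
import cv2
import numpy as np

def gradients(target_gray):
    """Return the x-direction gradient map, the y-direction gradient map,
    and the magnitude map mag = sqrt(gX^2 + gY^2)."""
    gX = cv2.Sobel(target_gray, cv2.CV_64F, 1, 0, ksize=3)
    gY = cv2.Sobel(target_gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.sqrt(gX ** 2 + gY ** 2)
    return gX, gY, mag
```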
Step S105: normalizing each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map.
The normalization may be max-min normalization, in which the maximum value maps to 1 and the minimum value maps to 0.
Step S106: normalizing each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map.
The normalization may likewise be max-min normalization, with the maximum value mapping to 1 and the minimum value mapping to 0.
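A minimal sketch of the max-min normalization used in steps S105 and S106 (the guard against a constant map is an added assumption):

```python
import numpy as np

def minmax_normalize(arr):
    """Map the minimum value to 0 and the maximum value to 1."""
    lo, hi = arr.min(), arr.max()
    if hi == lo:  # constant map: avoid division by zero
        return np.zeros_like(arr, dtype=np.float64)
    return (arr - lo) / (hi - lo)
```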
Step S107: inverting the color of each pixel point in the target image according to the target image and the mask grayscale image to obtain a target inversion map.
The target inversion map can be obtained according to the following formula:
Weight_{i,j} = (255 − Src_{i,j}) × Mask_{i,j}/255;
where Weight_{i,j} is the pixel value of pixel point (i, j) in the target inversion map, Src_{i,j} is the pixel value of pixel point (i, j) in the target image, and Mask_{i,j} is the pixel value of pixel point (i, j) in the mask grayscale image.
This exploits the characteristic that the eyeball is dark: after the color of the target image is inverted, the eyeball becomes white while bright regions take low gray values. The target inversion map of the target image is used as a prior weight, and fully combining the x-direction gradient map, the y-direction gradient map, and the target inversion map greatly improves the precision of the eyeball center positioning method.
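A minimal sketch of step S107, under the reading that the garbled factor in the formula above is a division by 255 (so the mask acts as a 0/1 gate):

```python
import numpy as np

def target_inversion_map(src_gray, mask):
    """Weight = (255 - Src) * Mask / 255; high where the dark eyeball was."""
    return (255.0 - src_gray) * (mask / 255.0)
```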
Fig. 7 shows a target inversion map provided in the embodiment of the present application.
Comparing fig. 4, fig. 5, fig. 6, and fig. 7, it can be seen that the eyeball changes from gray to white.
Step S108: performing the following operation for each pixel point (i, j) with a non-zero pixel value in the target inversion map:
calculating the sum of squared dot products Sum_{i,j} between the unit gradient vector G_{I,J} = (gX'(I, J), gY'(I, J)) of each pixel point (I, J) in the gradient maps and the unit position vector d_{I,J} pointing from (i, j) to (I, J):
Sum_{i,j} = Σ_{(I,J)} (d_{I,J} · G_{I,J})², with d_{I,J} = (I − i, J − j)/||(I − i, J − j)||;
where the gradient map is the x-direction gradient map or the y-direction gradient map, I is an integer greater than or equal to 0 and less than the total row number M of pixel points in the gradient map, J is an integer greater than or equal to 0 and less than the total column number N of pixel points in the gradient map, gX'(I, J) is the pixel value of pixel point (I, J) in the normalized x-direction gradient map, and gY'(I, J) is the pixel value of pixel point (I, J) in the normalized y-direction gradient map.
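A minimal sketch of steps S108 and S109 under the reconstruction above (the pairing of gX' with column offsets and gY' with row offsets is my reading); the double loop is quadratic in the pixel count, which is acceptable for a small eye crop:

```python
import numpy as np

def result_map(weight, gx_norm, gy_norm):
    """For every non-zero pixel of the target inversion map, sum the squared
    dot products of unit position vectors with the normalized gradients."""
    rows, cols = weight.shape
    J_grid, I_grid = np.meshgrid(np.arange(cols), np.arange(rows))
    out = np.zeros_like(weight, dtype=np.float64)
    for i in range(rows):
        for j in range(cols):
            if weight[i, j] == 0:
                continue
            di = I_grid - i                  # row offsets I - i
            dj = J_grid - j                  # column offsets J - j
            norm = np.sqrt(di ** 2 + dj ** 2)
            norm[i, j] = 1.0                 # avoid division by zero at (i, j)
            dot = (dj / norm) * gx_norm + (di / norm) * gy_norm
            out[i, j] = np.sum(dot ** 2)
    return out
```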
Step S109: obtaining a result map composed of the values Sum_{i,j} corresponding to the pixel points of the target inversion map as pixel values.
Step S110: normalizing each pixel value in the result map to the range 0 to 255 to obtain a normalized result map.
Step S111: inverting the color of each pixel point in the normalized result map to obtain an inverted result map.
The inverted result map can be obtained by the following formula:
Sum'_{i,j} = 255 − 255 × (Sum_{i,j} − minSum)/(maxSum − minSum);
where Sum'_{i,j} is the pixel value of pixel point (i, j) in the inverted result map, Sum_{i,j} is the pixel value of pixel point (i, j) in the result map, minSum is the minimum pixel value in the result map, and maxSum is the maximum pixel value in the result map.
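A minimal sketch of steps S110 and S111 fused into the single formula above (the guard for a constant map is an added assumption):

```python
import numpy as np

def inverted_result_map(sum_map):
    """Sum' = 255 - 255 * (Sum - minSum) / (maxSum - minSum)."""
    lo, hi = sum_map.min(), sum_map.max()
    if hi == lo:  # degenerate constant map; the formula is undefined here
        return np.full_like(sum_map, 255.0, dtype=np.float64)
    return 255.0 - 255.0 * (sum_map - lo) / (hi - lo)
```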
Fig. 8 shows an inverted result map provided in the embodiment of the present application.
As can be seen from fig. 8, the color of the center of the eyeball is significantly different from the color of the other regions.
Step S112: calculating the weighted average coordinate of the pixel points in the inverted result map according to the pixel value and the coordinate of each pixel point in the inverted result map.
The weighted average coordinate can be obtained by the following formula:
(Vi, Vj) = Σ_{i,j} f(Sum'_{i,j}) × (i, j) / Σ_{i,j} f(Sum'_{i,j});
where Sum'_{i,j} is the pixel value of pixel point (i, j) in the inverted result map, (Vi, Vj) is the weighted average coordinate, and f(Sum'_{i,j}) is a mapping function that maps the pixel values (0 to 255) of the inverted result map into a preset range, such that the smaller Sum'_{i,j} is, the larger f(Sum'_{i,j}) is.
Optionally, f(x) = e^(−0.01x).
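A minimal sketch of step S112 with the optional mapping f(x) = exp(−0.01·x); pixels whose inverted-result value is small (a strong center response) dominate the mean:

```python
import numpy as np

def weighted_center(inv_result):
    """Weighted average coordinate (Vi, Vj) over all pixel points."""
    rows, cols = inv_result.shape
    w = np.exp(-0.01 * inv_result)
    J_grid, I_grid = np.meshgrid(np.arange(cols), np.arange(rows))
    vi = np.sum(w * I_grid) / np.sum(w)
    vj = np.sum(w * J_grid) / np.sum(w)
    return vi, vj
```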
This abandons the prior-art approach of taking the maximum value as the optimal point; adopting a weighted average of coordinate positions significantly improves the stability of the eyeball center positioning method.
Step S113: determining the eyeball center coordinate according to the weighted average coordinate.
In the embodiment of the present application, the target image, the mask grayscale image, the x-direction gradient map, the y-direction gradient map, the normalized x-direction gradient map, the normalized y-direction gradient map, the target inversion map, the result map, the normalized result map, and the inverted result map all have the same total number of rows and total number of columns of pixel points, so i is an integer greater than or equal to 0 and less than M, and j is an integer greater than or equal to 0 and less than N.
According to the eyeball center positioning method provided in the embodiment of the present application, a target image containing the eye is obtained from the coordinates of the outer contour feature points of the eye, a mask grayscale image of the target image is then obtained, and, exploiting the characteristic that the eyeball is dark, the color of each pixel point in the target image is inverted according to the target image and the mask grayscale image to obtain a target inversion map. The x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image are calculated to obtain an x-direction gradient map and a y-direction gradient map; each x-direction gradient in the x-direction gradient map is then normalized to obtain a normalized x-direction gradient map, and each y-direction gradient in the y-direction gradient map is normalized to obtain a normalized y-direction gradient map. Then, for each pixel point (i, j), the sum of squared dot products Sum_{i,j} between the unit gradient vector G_{I,J} = (gX'(I, J), gY'(I, J)) of each pixel point (I, J) in the gradient maps and the unit position vector pointing from (i, j) to (I, J) is calculated, and a result map composed of the values Sum_{i,j} corresponding to the pixel points of the target inversion map is obtained. Each pixel value in the result map is normalized to the range 0 to 255 to obtain a normalized result map, the color of each pixel point in the normalized result map is inverted to obtain an inverted result map, the weighted average coordinate of the pixel points in the inverted result map is calculated according to the pixel value and the coordinate of each pixel point, and the eyeball center coordinate is determined according to the weighted average coordinate. The center of the eyeball is thereby determined accurately.
It can be understood that the larger the area of the target image and the mask grayscale image, the slower the computation. To increase the computation speed, if the area a of the target image is greater than an area threshold A, the target image and the mask grayscale image are reduced to area A without changing their aspect ratio. Please refer to fig. 9, which is a schematic flow chart of an implementation of obtaining the target image containing the eye according to the coordinates of the outer contour feature points in an eyeball center positioning method provided in the embodiment of the present application; the method includes:
Step S901: obtaining the circumscribed rectangle of the outer contour feature points.
Step S902: determining the region enclosed by the circumscribed rectangle as a quasi-target image.
In order to fully include the eye region, the circumscribed rectangle of the outer contour feature points is taken as the quasi-target image.
Step S903: judging whether the image area of the quasi-target image is greater than the area threshold.
Step S904: when the image area a of the quasi-target image is greater than the area threshold A, scaling the length and the width of the quasi-target image according to a scaling coefficient rate = sqrt(a/A), i.e., dividing each of them by rate, so as to obtain the scaled target image.
Step S905: when the image area a of the quasi-target image is less than or equal to the area threshold A, determining the quasi-target image as the target image.
Correspondingly, determining the eyeball center coordinate according to the weighted average coordinate includes: when the target image is the quasi-target image, determining the weighted average coordinate as the eyeball center coordinate; and when the target image is an image obtained by scaling the quasi-target image, taking the product of the weighted average coordinate and the scaling coefficient rate as the eyeball center coordinate.
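A minimal sketch of steps S903 to S905 plus the coordinate mapping just described; rate = sqrt(a/A) is the reading reconstructed above, chosen so that multiplying a coordinate in the scaled image by rate maps it back to the quasi-target image:

```python
import cv2
import numpy as np

def scale_quasi_target(quasi, area_threshold):
    """Return (target image, rate); rate is 1.0 when no scaling is needed."""
    h, w = quasi.shape[:2]
    a = h * w
    if a <= area_threshold:
        return quasi, 1.0
    rate = np.sqrt(a / area_threshold)
    scaled = cv2.resize(quasi, (int(round(w / rate)), int(round(h / rate))))
    return scaled, rate

# Usage: if (vi, vj) is the weighted average coordinate in the scaled image,
# the eyeball center in the quasi-target image is (vi * rate, vj * rate).
```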
It can be understood that the eyeball center positioning method can be iterated. Increasing the number of iterations may improve the accuracy of eyeball center positioning, but too many iterations may also lower it, so it is important to find a suitable number of iterations.
Before step S104, the eyeball center positioning method further includes: setting a maximum number of iterations and setting the current number of iterations to 0. After step S111, the eyeball center positioning method further includes:
adding 1 to the current number of iterations; judging whether the current number of iterations is greater than or equal to the maximum number of iterations; when the current number of iterations is greater than or equal to the maximum number of iterations, executing step S112; and when the current number of iterations is less than the maximum number of iterations, taking the inverted result map as the target image and returning to step S104.
The maximum number of iterations may be a positive integer greater than or equal to 1 and less than or equal to 4, or another positive integer such as 5 or 6.
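A minimal sketch of the iteration scheme, wiring together the helper functions sketched above (all function names are mine, not from the patent):

```python
import numpy as np

def locate_eye_center(target_gray, mask, n_max=3):
    """Iterate steps S104-S111 n_max times, then run steps S112-S113."""
    current = target_gray.astype(np.float64)
    for _ in range(n_max):
        gX, gY, _ = gradients(current)
        gx_n, gy_n = minmax_normalize(gX), minmax_normalize(gY)
        weight = target_inversion_map(current, mask)
        sums = result_map(weight, gx_n, gy_n)
        current = inverted_result_map(sums)  # becomes the next target image
    return weighted_center(current)
```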
In order to demonstrate the accuracy of the eyeball center positioning method provided in the embodiments of the present application, the applicant also conducted experiments.
36000 face images were extracted and tested with the above eyeball center positioning method. With the maximum number of iterations NMax set to 3 and the area threshold A set to 1000, the average error MeanError was 0.1217×r and the standard deviation StdError was 0.1086×r, where r is the radius of the eyeball.
The average computation time per image was only 2.57 ms, measured on a MacBook Pro (Retina, 15-inch, Mid 2015) with OS X 10.11 and Xcode 7.3.
In practical applications, to meet the application requirements of the cosmetic pupil, the average error MeanError of eyeball center positioning is usually required to be less than 0.15×r and the standard deviation StdError less than 0.15×r.
Please refer to fig. 10, which is a schematic structural diagram of an eyeball center positioning device according to an embodiment of the present application. The eyeball center positioning device includes: a first obtaining module 1001, a second obtaining module 1002, a third obtaining module 1003, a fourth obtaining module 1004, a fifth obtaining module 1005, a sixth obtaining module 1006, a seventh obtaining module 1007, a first calculating module 1008, an eighth obtaining module 1009, a ninth obtaining module 1010, a tenth obtaining module 1011, a second calculating module 1012, and a determining module 1013, wherein:
a first obtaining module 1001, configured to obtain coordinates of the outer contour feature points of the eye in the image.
There are many ways to obtain the outer contour feature points of the eye, for example the ASM (Active Shape Model) method or a neural network method.
Reference may be made to fig. 2, which is not described in detail here.
A second obtaining module 1002, configured to obtain a target image containing the eye according to the coordinates of the outer contour feature points.
A third obtaining module 1003, configured to set the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and set the pixel value of each pixel of the eye-exterior image in the target image to 0, so as to obtain a mask grayscale image.
The eye-interior image enclosed by the outer contour feature points can be obtained by drawing and filling the closed curve through the feature points.
A fourth obtaining module 1004, configured to calculate the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image, so as to obtain an x-direction gradient map composed of the x-direction gradients of the pixel points and a y-direction gradient map composed of the y-direction gradients of the pixel points.
The calculation of the x-direction gradient map and the y-direction gradient map may use first-order differential operators such as Sobel or Prewitt. Taking the Sobel operator as an example, the fourth obtaining module 1004 may include: a first obtaining unit, configured to calculate the x-direction gradient map and the y-direction gradient map by the following formulas:
gX(i, j) = [Src(i−1, j+1) + 2·Src(i, j+1) + Src(i+1, j+1)] − [Src(i−1, j−1) + 2·Src(i, j−1) + Src(i+1, j−1)];
gY(i, j) = [Src(i+1, j−1) + 2·Src(i+1, j) + Src(i+1, j+1)] − [Src(i−1, j−1) + 2·Src(i−1, j) + Src(i−1, j+1)];
where Src(i, j) represents the pixel value of pixel point (i, j) in the target image.
a fifth obtaining module 1005, configured to normalize each x-direction gradient in the x-direction gradient map, to obtain a normalized x-direction gradient map.
A sixth obtaining module 1006, configured to normalize each y-direction gradient in the y-direction gradient map, to obtain a normalized y-direction gradient map.
A seventh obtaining module 1007, configured to invert the color of each pixel in the target image according to the target image and the mask grayscale image, so as to obtain a target inversion diagram.
The seventh obtaining module 1007 may include a second obtaining unit configured to obtain a target inversion map according to the following formula:
Weighti,j=(255-Srci,j)×Maski,j(255); among them, Weighti,jIs the pixel value, Src, of a pixel point (i, j) in the target inverse mapi,jIs the pixel value of the pixel point (i, j) in the target image, Maski,jAnd (3) the pixel value of the pixel point (i, j) in the mask gray level image.
A first calculating module 1008, configured to perform the following operations for each pixel point (i, j) in the target inverse image whose pixel value is non-zero:
calculating a unit gradient vector G of the pixel point (i, j)i,j(gX '(I, J), gY' (I, J)), and the unit position vector of each pixel (I, J) in the gradient mapSum of squared dot products Sum of Sumi,j:The gradient map is an x-direction gradient map or a y-direction gradient map, I is a total row number M which is greater than or equal to 0 and smaller than the pixel points in the gradient map, J is a total column number N which is greater than or equal to 0 and smaller than the pixel points in the gradient map, gX '(I, J) is a pixel value of the pixel point (I, J) of the normalized x-direction gradient map, and gY' (I, J) is a pixel value of the pixel point (I, J) of the normalized y-direction gradient map.
An eighth obtaining module 1009, configured to obtain a result map composed of the values Sum_{i,j} corresponding to the pixel points of the target inversion map as pixel values.
A ninth obtaining module 1010, configured to normalize each pixel value in the result map to the range 0 to 255, to obtain a normalized result map.
A tenth obtaining module 1011, configured to invert the color of each pixel point in the normalized result map, to obtain an inverted result map.
The tenth obtaining module 1011 may include a third obtaining unit, configured to obtain the inverted result map by the following formula:
Sum'_{i,j} = 255 − 255 × (Sum_{i,j} − minSum)/(maxSum − minSum);
where Sum'_{i,j} is the pixel value of pixel point (i, j) in the inverted result map, Sum_{i,j} is the pixel value of pixel point (i, j) in the result map, minSum is the minimum pixel value in the result map, and maxSum is the maximum pixel value in the result map.
A second calculating module 1012, configured to calculate the weighted average coordinate of the pixel points in the inverted result map according to the pixel value and the coordinate of each pixel point in the inverted result map.
The second calculating module 1012 may include a fourth obtaining unit, configured to obtain the weighted average coordinate by the following formula:
(Vi, Vj) = Σ_{i,j} f(Sum'_{i,j}) × (i, j) / Σ_{i,j} f(Sum'_{i,j});
where Sum'_{i,j} is the pixel value of pixel point (i, j) in the inverted result map, (Vi, Vj) is the weighted average coordinate, and f(Sum'_{i,j}) is a mapping function that maps the pixel values (0 to 255) of the inverted result map into a preset range, such that the smaller Sum'_{i,j} is, the larger f(Sum'_{i,j}) is.
A determining module 1013, configured to determine the eyeball center coordinate according to the weighted average coordinate.
In the eyeball center positioning device provided in the embodiment of the present application, the second obtaining module 1002 obtains a target image containing the eye from the coordinates of the outer contour feature points of the eye, and the third obtaining module 1003 then obtains a mask grayscale image of the target image; exploiting the characteristic that the eyeball is dark, the seventh obtaining module 1007 inverts the color of each pixel point in the target image according to the target image and the mask grayscale image to obtain a target inversion map. The fourth obtaining module 1004 calculates the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image to obtain an x-direction gradient map and a y-direction gradient map; the fifth obtaining module 1005 then normalizes each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map, and the sixth obtaining module 1006 normalizes each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map. The first calculating module 1008 calculates, for each pixel point (i, j), the sum of squared dot products Sum_{i,j} between the unit gradient vector G_{I,J} = (gX'(I, J), gY'(I, J)) of each pixel point (I, J) in the gradient maps and the unit position vector pointing from (i, j) to (I, J); the eighth obtaining module 1009 obtains a result map composed of the values Sum_{i,j} corresponding to the pixel points of the target inversion map; the ninth obtaining module 1010 normalizes the pixel values in the result map to the range 0 to 255 to obtain a normalized result map; the tenth obtaining module 1011 inverts the color of each pixel point in the normalized result map to obtain an inverted result map; the second calculating module 1012 calculates the weighted average coordinate of the pixel points in the inverted result map according to the pixel value and the coordinate of each pixel point; and the determining module 1013 determines the eyeball center coordinate according to the weighted average coordinate. The center of the eyeball is thereby determined accurately.
Please refer to fig. 11, which is a schematic structural diagram of an implementation of the second obtaining module in an eyeball center positioning device according to an embodiment of the present application. The second obtaining module includes: a fifth obtaining unit 1101, a first determining unit 1102, a judging unit 1103, a scaling unit 1104, and a second determining unit 1105, wherein:
a fifth obtaining unit 1101, configured to obtain the circumscribed rectangle of the outer contour feature points.
A first determining unit 1102, configured to determine the region enclosed by the circumscribed rectangle as a quasi-target image.
A judging unit 1103, configured to judge whether the image area of the quasi-target image is greater than the area threshold.
A scaling unit 1104, configured to, when the image area a of the quasi-target image is greater than the area threshold A, scale the length and the width of the quasi-target image according to the scaling coefficient rate = sqrt(a/A), so as to obtain the scaled target image.
A second determining unit 1105, configured to determine the quasi-target image as the target image when the image area a of the quasi-target image is less than or equal to the area threshold A.
Accordingly, the determining module 1013 includes: a third determining unit, configured to determine the weighted average coordinate as the eyeball center coordinate when the target image is the quasi-target image; and a fourth determining unit, configured to, when the target image is an image obtained by scaling the quasi-target image, take the product of the weighted average coordinate and the scaling coefficient rate as the eyeball center coordinate.
It can be understood that the eyeball center positioning method can be iterated. Increasing the number of iterations may improve the accuracy of eyeball center positioning, but too many iterations may also lower it, so it is important to find a suitable number of iterations.
The eyeball center positioning device may therefore further include: a setting module, configured to set the maximum number of iterations and set the current number of iterations to 0; an adding module, configured to add 1 to the current number of iterations; a judging module, configured to judge whether the current number of iterations is greater than or equal to the maximum number of iterations; a first triggering module, configured to trigger the second calculating module 1012 when the current number of iterations is greater than or equal to the maximum number of iterations; and a second triggering module, configured to, when the current number of iterations is less than the maximum number of iterations, take the inverted result map as the target image and trigger the fourth obtaining module 1004.
The embodiment of the present application further provides an eyeball center positioning system, including: a processor and a memory, wherein:
the memory is configured to store instructions executable by the processor;
and the processor is configured to:
acquire coordinates of the outer contour feature points of the eye in the image;
obtain a target image containing the eye according to the coordinates of the outer contour feature points;
set the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and set the pixel value of each pixel of the eye-exterior image in the target image to 0, so as to obtain a mask grayscale image;
calculate the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel point (i, j) in the target image, so as to obtain an x-direction gradient map composed of the x-direction gradients of the pixel points and a y-direction gradient map composed of the y-direction gradients of the pixel points;
normalize each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map;
normalize each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map;
invert the color of each pixel point in the target image according to the target image and the mask grayscale image to obtain a target inversion map;
perform the following operation for each pixel point (i, j) with a non-zero pixel value in the target inversion map:
calculating the sum of squared dot products Sum_{i,j} between the unit gradient vector G_{I,J} = (gX'(I, J), gY'(I, J)) of each pixel point (I, J) in the gradient maps and the unit position vector d_{I,J} pointing from (i, j) to (I, J):
Sum_{i,j} = Σ_{(I,J)} (d_{I,J} · G_{I,J})², with d_{I,J} = (I − i, J − j)/||(I − i, J − j)||;
where the gradient map is the x-direction gradient map or the y-direction gradient map, I is an integer greater than or equal to 0 and less than the total row number M of pixel points in the gradient map, J is an integer greater than or equal to 0 and less than the total column number N of pixel points in the gradient map, gX'(I, J) is the pixel value of pixel point (I, J) in the normalized x-direction gradient map, and gY'(I, J) is the pixel value of pixel point (I, J) in the normalized y-direction gradient map;
obtain a result map composed of the values Sum_{i,j} corresponding to the pixel points of the target inversion map as pixel values;
normalize each pixel value in the result map to the range 0 to 255 to obtain a normalized result map;
invert the color of each pixel point in the normalized result map to obtain an inverted result map;
calculate the weighted average coordinate of the pixel points in the inverted result map according to the pixel value and the coordinate of each pixel point in the inverted result map;
and determine the eyeball center coordinate according to the weighted average coordinate.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.