CN109784248A - Pupil positioning method, pupil positioning device, electronic equipment, storage medium - Google Patents


Publication number
CN109784248A
Authority
CN
China
Prior art keywords
region
pupil
candidate
target image
image
Prior art date
Legal status
Pending
Application number
CN201910002314.8A
Other languages
Chinese (zh)
Inventor
孙建康
张浩
陈丽莉
楚明磊
薛鸿臻
董泽华
王雪丰
Current Assignee
BOE Technology Group Co Ltd
Beijing BOE Optoelectronics Technology Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Beijing BOE Optoelectronics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd and Beijing BOE Optoelectronics Technology Co Ltd
Priority to CN201910002314.8A
Publication of CN109784248A
Legal status: Pending


Abstract

The present disclosure provides a pupil positioning method, a pupil positioning device, an electronic device, and a computer-readable storage medium, belonging to the field of image processing. The method comprises: acquiring a target image; extracting one or more connected regions from the target image; determining a radian coefficient for each connected region; selecting candidate pupil regions from the connected regions according to the radian coefficient; and determining a target pupil region from the candidate pupil regions based on the size of each candidate pupil region. The disclosure identifies and locates the pupil in an image with a simple algorithm, offers high processing efficiency, applies generally to images from most scenes, and achieves high accuracy.

Description

Pupil positioning method, pupil positioning device, electronic equipment, storage medium
Technical field
This disclosure relates to the field of image processing, and in particular to a pupil positioning method, a pupil positioning device, an electronic device, and a computer-readable storage medium.
Background
Pupil positioning (also called pupil recognition or human-eye localization) refers to detecting the position of a human pupil in an image or video and determining the region where the pupil lies. The technology has important applications in fields such as face recognition, virtual reality (VR), electronic security, and safe driving.
Most existing pupil positioning methods are based on machine learning: the features of the pupil must be learned and trained, and the pupil region is then identified and located according to the learned features. However, such methods depend heavily on existing pupil feature information and generally suffer from some degree of overfitting; in particular, pupil positioning for a "stranger" absent from the training data is usually poor, and the robustness of the positioning algorithm is weak, so the accuracy of the positioning result is low.
It should be noted that the information disclosed in the Background section above is provided only to reinforce understanding of the background of the disclosure, and may therefore include information that does not constitute prior art already known to a person of ordinary skill in the art.
Summary of the invention
The present disclosure provides a pupil positioning method, a pupil positioning device, an electronic device, and a computer-readable storage medium, thereby overcoming, at least to some extent, the low accuracy of existing pupil positioning methods.
Other features and advantages of the disclosure will become apparent from the following detailed description, or will be learned in part through practice of the disclosure.
According to one aspect of the disclosure, a pupil positioning method is provided, comprising: acquiring a target image; extracting one or more connected regions from the target image; determining a radian coefficient of each connected region; selecting candidate pupil regions from the connected regions according to the radian coefficient; and determining a target pupil region from the candidate pupil regions based on the size of each candidate pupil region.
In an exemplary embodiment of the disclosure, extracting one or more connected regions from the target image comprises: extracting one or more connected regions from the target image through a preset sliding window.
In an exemplary embodiment of the disclosure, determining the target pupil region from the candidate pupil regions based on their sizes comprises: calculating the area ratio of each candidate pupil region within the preset sliding window; and determining the candidate pupil region whose area ratio is closest to a standard proportion as the target pupil region.
In an exemplary embodiment of the disclosure, extracting one or more connected regions from the target image through the preset sliding window comprises: traversing the target image with the preset sliding window to obtain multiple local areas of the target image; and extracting one or more connected regions in each local area; and selecting candidate pupil regions from the connected regions according to the radian coefficient comprises: determining the connected region with the largest radian coefficient in each local area as a candidate pupil region.
In an exemplary embodiment of the disclosure, extracting one or more connected regions from the target image comprises: extracting one or more candidate connected regions from the target image; for any candidate connected region A_i, obtaining the bounding rectangle D_i of A_i and calculating the shape factor Q(A_i) of A_i by the following formula:

Q(A_i) = b_i / a_i

where a_i is the long-side length of D_i and b_i is the short-side length of D_i; and determining candidate connected regions whose shape factor meets a preset condition as the connected regions.
In an exemplary embodiment of the disclosure, determining the radian coefficient of a connected region comprises: for any connected region B_j, obtaining the bounding rectangle D_j of B_j; obtaining the area S(B_j) of B_j; and calculating the radian coefficient R(B_j) of B_j by the following formula:

R(B_j) = 4·S(B_j) / (π·a_j·b_j)

where a_j is the long-side length of D_j and b_j is the short-side length of D_j.
In an exemplary embodiment of the disclosure, the bounding rectangle includes a rotated bounding rectangle.
In an exemplary embodiment of the disclosure, after the target pupil region is determined from the candidate pupil regions, the method further comprises: determining the centroid of the target pupil region as the pupil center.
In an exemplary embodiment of the disclosure, after the target image is acquired, the method further comprises: preprocessing the target image.
In an exemplary embodiment of the disclosure, the preprocessing includes any one or more of the following: grayscale conversion, Gaussian filtering, binarization, and morphological processing.
In an exemplary embodiment of the disclosure, the method is applied to a virtual reality device; acquiring the target image comprises: capturing an eye image of the user with an image acquisition module of the virtual reality device, and determining the eye image as the target image.
In an exemplary embodiment of the disclosure, the image acquisition module includes an infrared imaging unit, and the eye image is an infrared image.
According to one aspect of the disclosure, a pupil positioning device is provided, comprising: a target image acquisition module for acquiring a target image; a connected region extraction module for extracting one or more connected regions from the target image; a radian coefficient determination module for determining the radian coefficient of each connected region; a candidate region selection module for selecting candidate pupil regions from the connected regions according to the radian coefficient; and a target region determination module for determining a target pupil region from the candidate pupil regions based on the size of each candidate pupil region.
According to one aspect of the disclosure, an electronic device is provided, comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform, by executing the executable instructions, the method described in any one of the above.
According to one aspect of the disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the method described in any one of the above.
Exemplary embodiments of the disclosure have the following beneficial effects:
Connected regions are extracted from the target image, the radian coefficient of each connected region is determined, candidate pupil regions are selected from the connected regions according to the radian coefficient, and the target pupil region is then determined from them according to the size of each candidate pupil region. On the one hand, the two indices of radian coefficient and size characterize how closely a connected region resembles a true pupil, so the method applies generally to human pupils in images from most scenes and achieves high accuracy. On the other hand, compared with prior-art approaches that rely on machine learning models, the present exemplary embodiment requires no prior learning of large numbers of pupil features and does not depend on sample data or a training process; the overall procedure is simple, demands little of hardware resources, and offers high processing efficiency.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the specification, serve to explain the principles of the disclosure. Evidently, the drawings described below are only some embodiments of the disclosure; a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 shows a flowchart of a pupil positioning method in the present exemplary embodiment;
Fig. 2 shows a schematic flow of extracting connected regions in the present exemplary embodiment;
Fig. 3 shows a schematic diagram of a preset sliding window in the present exemplary embodiment;
Fig. 4 shows a schematic diagram of traversing the target image with a preset sliding window in the present exemplary embodiment;
Fig. 5 shows a schematic diagram of Gaussian filtering in the present exemplary embodiment;
Fig. 6 shows a flowchart of another pupil positioning method in the present exemplary embodiment;
Fig. 7 shows a structural block diagram of a pupil positioning device in the present exemplary embodiment;
Fig. 8 shows a structural block diagram of an image acquisition module in the pupil positioning device in the present exemplary embodiment;
Fig. 9 shows an electronic device for implementing the above method in the present exemplary embodiment;
Fig. 10 shows a computer-readable storage medium for implementing the above method in the present exemplary embodiment.
Detailed description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments can, however, be implemented in a variety of forms and should not be understood as limited to the examples set forth herein; rather, these embodiments are provided so that the disclosure will be more thorough and complete and so that the concepts of the example embodiments are fully conveyed to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The exemplary embodiment of the disclosure first provides a pupil positioning method, which can be applied to a computer, or to a VR device, electronic security equipment, a safe-driving apparatus system, or other systems with image processing capability. Referring to Fig. 1, the method may include the following steps S110 to S150:
Step S110: acquire a target image.
The target image is the image on which pupil positioning is to be performed. It may be a still image or a dynamic image (such as a video or an animated picture); a dynamic image usually consists of still images in successive frames, and the pupil positioning of this exemplary embodiment can be performed on each of these frames or on some of them. The target image may be a color image (such as an RGB image), a grayscale image, a black-and-white (binary) image, an infrared image, or an image in various other modes. Step S110 can be implemented differently on different execution subjects: for example, the target image may be input to a computer manually; it may be acquired in real time by the image acquisition module (such as a camera) of a VR device or of electronic security equipment; or a storage or link path of the target image may be specified so that a computer or other device obtains the target image through that path.
The above is only an exemplary illustration; the disclosure places no special limitation on the specific form of the target image or the specific manner of acquiring it.
Step S120: extract one or more connected regions from the target image.
A connected region is an image region composed of adjacent pixels with identical (or similar) values. In terms of image content, a connected region usually characterizes one physical object; a target image may contain one or more physical objects, so extracting connected regions is equivalent to splitting the target image into patches of discrete physical objects.
In an exemplary embodiment, step S120 may include the following step:
extracting one or more connected regions from the target image through a preset sliding window.
A preset sliding window is a window with preset parameters such as size and sliding step. It slides across the target image and covers a local area of it, which is equivalent to selecting a local area image from the target image. Connected regions can then be extracted within the local area image, achieving a finer extraction of connected regions.
Specific embodiments of extracting connected regions are illustrated below with examples.
In an exemplary embodiment, referring to Fig. 2, connected regions can be extracted through the following detailed steps:
(1) Referring to Fig. 2(a), in the target image 210, obtain a local area image using the preset sliding window 211.
(2) Binarize the local area image in Fig. 2(a) to obtain the binary image 212 in Fig. 2(b). The key to binarization lies in the threshold; this exemplary embodiment places no special limitation on the specific threshold-selection algorithm: for example, the maximum between-class variance (Otsu) algorithm, the gray-mean algorithm, the bimodal method, or the maximum-entropy method may be used.
(3) Apply morphological processing to the binary image 212 in Fig. 2(b). Morphological processing extracts from the image the components that are meaningful for expressing and describing the shape of a region, so that subsequent recognition can capture the important shape features of the image. In the present exemplary embodiment, morphological processing can remove interfering pixels in the binary image 212, such as the interference regions 213, 214, and 215 in Fig. 2(b), yielding the binary image 216 of Fig. 2(c) after removal. Morphological processing can be realized through specific methods such as erosion, dilation, opening, or closing; for example, the morphological processing from Fig. 2(b) to Fig. 2(c) specifically uses two passes of 3*3 erosion followed by two passes of 3*3 dilation. Of course, the disclosure is not particularly limited in this respect.
(4) In the binary image 216 in Fig. 2(c), the value of every pixel is 0 (black) or 1 (white); adjacent pixels with identical values are joined into the connected regions 217 and 218.
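As a concrete illustration, steps (2) and (4) above can be sketched in Python (the language choice and function names are ours, not the patent's); the threshold here is fixed rather than adaptively selected, and the morphological clean-up of step (3) is omitted:

```python
from collections import deque

def binarize(gray, threshold):
    # Step (2): fixed-threshold binarization. Otsu or another adaptive
    # threshold would replace the constant in a fuller implementation.
    return [[1 if v >= threshold else 0 for v in row] for row in gray]

def connected_regions(binary):
    # Step (4): join 4-adjacent pixels of value 1 into connected regions
    # by breadth-first search over the binary image.
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 1 and not seen[y][x]:
                region, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    region.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                regions.append(region)
    return regions
```

The same labeling could equally be obtained from library routines such as OpenCV's `cv2.connectedComponents`; the hand-rolled version only makes step (4) explicit.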
In an exemplary embodiment, connected regions can also be extracted through the following detailed steps:
(1) In the local area image covered by the preset sliding window, first choose a pixel A0, which may be located at the image center, at an image corner, or at any other position.
(2) Take A0 as the seed pixel and put it into a set A; then scan the neighboring pixels of A0 and calculate their pixel-value difference from A0; if the difference is smaller than a specific threshold, add the neighboring pixel to set A, numbering the added pixels A2, A3, A4, and so on in sequence.
(3) Take the pixels A2, A3, A4, ... in A as seed pixels in turn and repeat step (2), until no further pixel can be added to set A; all the pixels in A then form a connected region A.
(4) For the other pixels in the local area image, repeat steps (1) to (3) until all pixels have been divided into different sets; all the pixels in each set form one connected region.
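The seed-growing procedure of steps (1) to (3) might look as follows in Python (an illustrative sketch; comparing each neighbour against the original seed A0 is one reading of the text, which is ambiguous on this point):

```python
def region_grow(gray, seed, tol=10):
    """Grow a set of pixels from a seed by absolute pixel-value difference,
    following steps (1)-(3). `tol` is the specific threshold on the
    difference from the seed pixel's value."""
    h, w = len(gray), len(gray[0])
    seed_value = gray[seed[0]][seed[1]]
    region, frontier = {seed}, [seed]
    while frontier:
        cy, cx = frontier.pop()
        # scan the 4-neighbours of the current pixel
        for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(gray[ny][nx] - seed_value) <= tol):
                region.add((ny, nx))
                frontier.append((ny, nx))
    return region
```

Step (4) would then repeat this from unassigned pixels until every pixel belongs to some set.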
Step S130: determine the radian coefficient of each connected region.
A connected region may take many different shapes, with irregular shapes the more probable. The radian coefficient is a parameter characterizing how close a connected region is to an arc; there are several specific ways to calculate it, illustrated below.
In an exemplary embodiment, step S130 may specifically include the following steps:
for any connected region B_j, determine the central pixel C(B_j) of B_j;
calculate the maximum distance L_max(B_j) and the minimum distance L_min(B_j) from the central pixel C(B_j) to the boundary of B_j;
with C(B_j) as the center, L_max(B_j) as the major-axis radius, and L_min(B_j) as the minor-axis radius, draw the approximate ellipse E_j of B_j;
count the number Count(j) of pixels lying both within B_j and within E_j;
calculate the radian coefficient R(B_j) of B_j by the following formula (1):

R(B_j) = 2·Count(j) / (Count(B_j) + Count(E_j))    (1)

where Count(B_j) is the total number of pixels in B_j and Count(E_j) is the total number of pixels in the approximate ellipse E_j.
In other words, the approximate ellipse of the connected region can be determined, and the degree of overlap between the connected region and its approximate ellipse is taken as the radian coefficient; the radian coefficient is largest, equal to 1, when the two coincide completely.
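A sketch of this approximate-ellipse variant, under two stated assumptions: the overlap is scored with a Dice-style formula (the exact form of formula (1) is garbled in this copy and reconstructed here), and the ellipse is drawn axis-aligned, since the text does not fix an orientation:

```python
import numpy as np

def radian_coefficient_ellipse(mask):
    """Overlap between a boolean region mask and its approximate ellipse,
    used as the radian coefficient; equals 1 when the two coincide."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                      # central pixel C(B_j)
    pad = np.pad(mask, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1]
                & pad[1:-1, :-2] & pad[1:-1, 2:])      # all 4 neighbours set
    by, bx = np.nonzero(mask & ~interior)              # boundary pixels
    d = np.hypot(by - cy, bx - cx)
    lmax, lmin = d.max(), max(d.min(), 1.0)            # L_max, L_min
    yy, xx = np.mgrid[:mask.shape[0], :mask.shape[1]]
    ellipse = ((yy - cy) / lmax) ** 2 + ((xx - cx) / lmin) ** 2 <= 1.0
    inter = (mask & ellipse).sum()                     # Count(j)
    return 2.0 * inter / (mask.sum() + ellipse.sum())
```

For a roughly circular region the coefficient comes out close to 1, as expected for a pupil-like blob.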
In an exemplary embodiment, step S130 may also specifically include the following steps:
for any connected region B_j, obtain the bounding rectangle D_j of B_j;
obtain the area S(B_j) of B_j;
calculate the radian coefficient R(B_j) of B_j by the following formula (2):

R(B_j) = 4·S(B_j) / (π·a_j·b_j)    (2)

where a_j is the long-side length of D_j and b_j is the short-side length of D_j. The quantity π·a_j·b_j/4 is the area of the ellipse with semi-axes a_j/2 and b_j/2, i.e. the inscribed ellipse of the bounding rectangle D_j; formula (2) thus calculates the ratio of the connected region's area to that of the inscribed ellipse, and this ratio serves as the radian coefficient.
In other words, with the bounding rectangle of the connected region as a reference, its inscribed ellipse is determined as the approximate ellipse of the connected region, and the area ratio of the two characterizes their degree of similarity, serving as the final radian coefficient. When determining the bounding rectangle of a connected region, preset rectangular axes can be used as a reference: for example, with the horizontal axis of the target image as the X axis and the vertical axis as the Y axis, a bounding rectangle whose sides are respectively parallel to the X and Y axes can be drawn; the bounding rectangle may also be rotated arbitrarily relative to the X and Y axes.
Further, in an exemplary embodiment, the bounding rectangle D_j above may be the rotated bounding rectangle of B_j, that is, the bounding rectangle with the smallest area among all bounding rectangles of B_j, which is also the one with the highest degree of coincidence with B_j. In the present exemplary embodiment, the bounding rectangle and its inscribed ellipse approximately characterize the shape of the connected region; among all bounding rectangles of a connected region, the rotated bounding rectangle coincides with the region most closely and matches its shape best, so characterizing the region's shape by the rotated bounding rectangle and its inscribed ellipse is more accurate.
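Formula (2) can be sketched as follows; using the axis-aligned bounding box instead of the rotated minimum-area rectangle is a simplification of this example (a library routine such as OpenCV's `cv2.minAreaRect` would supply the rotated version):

```python
import numpy as np

def radian_coefficient_rect(mask):
    """Formula (2): ratio of the region's pixel area to the area of the
    inscribed ellipse of its bounding rectangle."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    a, b = max(h, w), min(h, w)          # long side a_j, short side b_j
    s = mask.sum()                       # region area S(B_j) in pixels
    # inscribed ellipse: semi-axes a/2 and b/2, area pi*a*b/4
    return 4.0 * s / (np.pi * a * b)
```

A pixelated disk scores a little under 1, while a filled square overshoots to 4/π, which is why near-elliptical regions stand out under this coefficient.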
It should be noted that formula (1) and formula (2) calculate the radian coefficient by different methods, and the concrete meaning of the radian coefficient differs somewhat between the two, so a radian coefficient calculated with formula (1) is not comparable with one calculated with formula (2). In addition, other parameters that characterize how close a connected region is to an arc can also serve as the radian coefficient in step S130; the calculation of the radian coefficient is not limited to the methods shown in the above embodiments, and the disclosure places no special limitation on this.
Step S140: select candidate pupil regions from the connected regions according to the radian coefficient.
A true pupil is generally circular or elliptical. The radian coefficient characterizes how close a connected region is to an arc, and thus, to a certain extent, how close it is to the shape of a true pupil; one or more regions close to a true pupil can therefore be filtered out of the connected regions according to their radian coefficients and used as candidate pupil regions. A condition for selecting candidate pupil regions is usually set in advance. For example, a threshold on the radian coefficient (such as 0.7 or 0.8) can be set, and connected regions whose radian coefficient reaches the threshold are determined as candidate pupil regions. Alternatively, the connected regions can be ranked by radian coefficient from high to low, and the several with the highest coefficients (such as 5 or 10) or a given proportion of them (such as 10% or 20%) selected as candidate pupil regions. If other approximately elliptical physical objects are also present in the target image, a higher radian coefficient does not necessarily mean closer to a true pupil; in that case a numerical range of the radian coefficient (such as 0.7 to 0.9) can be set, and connected regions whose radian coefficient falls within the range are determined as candidate pupil regions. The disclosure places no special limitation on the specific manner of selecting candidate regions. It should be added that in step S140 the radian coefficients of all connected regions may fail to meet the set condition, in which case the number of selected candidate pupil regions is 0, indicating that there is most likely no human pupil in the target image.
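The selection rules above (a threshold or range on the radian coefficient, or a top-k ranking) reduce to a few lines; the function names and default values are illustrative:

```python
def select_candidates(regions, coefficients, low=0.7, high=0.9):
    # Range rule: keep regions whose radian coefficient lies in [low, high];
    # [0.7, 0.9] is one of the illustrative settings from the text.
    return [r for r, c in zip(regions, coefficients) if low <= c <= high]

def top_k_candidates(regions, coefficients, k=5):
    # Ranking rule: sort by radian coefficient, highest first, keep top k.
    ranked = sorted(zip(coefficients, range(len(regions))), reverse=True)
    return [regions[i] for _, i in ranked[:k]]
```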
Step S150: determine the target pupil region from the candidate pupil regions based on the size of each candidate pupil region.
The candidate regions selected in step S140 are usually not of identical size, so they can be screened further according to size to determine the region closest to a true pupil, i.e. the target pupil region. The size in step S150 may be the absolute size of a candidate pupil region, such as its length and width, radius, or area; it may also be the relative size of the candidate pupil region in the target image, such as length and width, radius, or area expressed in pixel counts, which can be further converted to absolute size according to the scale of the target image.
The size of each candidate pupil region is compared with a predetermined standard pupil size, and the closest one or more candidate pupil regions are determined as target pupil regions. Considering that multiple pupils may be present in the target image (for example, both eyes of a person, or multiple faces), the number of target pupil regions is not limited to one and may also be two or more.
In an exemplary embodiment, the standard pupil size can be determined by the following detailed method:
(1) Randomly collect n*m eye images of n people under identical photographic conditions (e.g. identical focal length, image size, and pixel count), m images per person.
(2) Measure the major-axis radius a_k^j and the minor-axis radius b_k^j of the elliptical pupil in every eye image, where a_k^j is the major-axis radius and b_k^j the minor-axis radius of the k-th pupil image of the j-th person, j ∈ [1, n], k ∈ [1, m].
(3) Calculate the means of the pupil major-axis and minor-axis radii over the n people to obtain the major-axis radius ST_long and the minor-axis radius ST_short of the standard pupil, where:

ST_long = (1/(n·m)) Σ_{j=1..n} Σ_{k=1..m} a_k^j,    ST_short = (1/(n·m)) Σ_{j=1..n} Σ_{k=1..m} b_k^j
Based on the above standard pupil size, step S150 can then detect the major-axis radius and minor-axis radius of each candidate pupil region (for example, half of the longest and shortest chords of the candidate pupil region, or the major- and minor-axis radii of its approximate ellipse) and compare them with ST_long and ST_short respectively; a candidate pupil region with high similarity (or low difference) is the target pupil region.
In an exemplary embodiment, standard size data on the human pupil from medical databases, encyclopedic data, or other specialized databases can also be used, converted into a pixel count in the target image so that it can be compared with the relative size of each candidate pupil region.
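The calibration and comparison just described can be sketched as follows; the sum-of-absolute-deviations similarity measure and the tuple layout are assumptions of this example, since the text leaves the comparison metric open:

```python
def standard_pupil(long_radii, short_radii):
    """ST_long and ST_short as the means of the measured major/minor
    pupil radii over all n*m calibration images."""
    n = len(long_radii)
    return sum(long_radii) / n, sum(short_radii) / n

def closest_to_standard(candidates, st_long, st_short):
    """candidates: (region_id, long_radius, short_radius) tuples; return
    the one with the smallest total deviation from the standard radii."""
    return min(candidates,
               key=lambda c: abs(c[1] - st_long) + abs(c[2] - st_short))
```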
In an exemplary embodiment, where connected regions are extracted through a preset sliding window, step S150 can be realized specifically through the following steps:
calculate the area ratio of each candidate pupil region within the preset sliding window;
determine the candidate pupil region whose area ratio is closest to the standard proportion as the target pupil region.
In other words, the standard pupil size above can be converted into the form of a standard proportion within the preset sliding window. For example, referring to Fig. 3, the standard pupil is approximated as a black circle of radius r, and the preset sliding window is a white square area with side length 4r, where the value of r is the pixel radius of the standard pupil in the target image. The standard proportion used for pupil determination can then be calculated as:

P = π·r² / (4r)² = π/16

Subsequently, for a candidate pupil region C_i, its area ratio with respect to the preset sliding window T is calculated:

P_i = S(C_i) / S(T)

When calculating S(C_i), the area can be determined from the number of pixels filling C_i, or the area of the approximate circle or approximate ellipse of C_i can be calculated and taken as the area of C_i. Finally, P_i is compared with P, and the closest candidate pupil region is the target pupil region.
In an exemplary embodiment, preset sliding windows may have different sizes; for candidate regions in preset sliding windows of different sizes, the standard proportion used when calculating the area ratio differs accordingly, and is calculated specifically from the size of the preset sliding window.
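Putting the area-ratio rule together: compute P_i = S(C_i)/S(T) for each candidate and pick the one closest to the standard proportion π/16 as reconstructed above; the function name and interface are illustrative:

```python
import math

def target_by_area_ratio(candidate_areas, window_side):
    """Return the index of the candidate whose area ratio inside a square
    sliding window of side `window_side` is closest to P = pi/16."""
    window_area = window_side ** 2       # S(T)
    standard = math.pi / 16.0            # standard proportion P
    ratios = [a / window_area for a in candidate_areas]
    return min(range(len(ratios)), key=lambda i: abs(ratios[i] - standard))
```

For a 40-pixel window (r = 10), the standard proportion corresponds to a candidate of about 314 pixels, so a 320-pixel candidate beats ones of 100 or 900 pixels.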
Based on the above description, in the present exemplary embodiment, connected regions are extracted from the target image, the radian coefficient of each connected region is determined, candidate pupil regions are selected from the connected regions according to the radian coefficient, and the target pupil region is then determined from them according to the size of each candidate pupil region. On the one hand, the two indices of radian coefficient and size characterize how closely a connected region resembles a true pupil, so the method applies generally to human pupils in images from most scenes and achieves high accuracy. On the other hand, compared with prior-art approaches that rely on machine learning models, the present exemplary embodiment requires no prior learning of large numbers of pupil features and does not depend on sample data or a training process; the overall procedure is simple, demands little of hardware resources, and offers high processing efficiency.
In an exemplary embodiment, step S120 may specifically include the following steps:
traverse the target image using the preset sliding window to obtain multiple local areas of the target image;
extract one or more connected regions in each local area.
Correspondingly, step S140 can be realized through the following step:
determine the connected region with the largest radian coefficient in each local area as a candidate pupil region.
When traversing the target image, referring to Fig. 4, after the size and step of the preset sliding window are set, the target image can usually be scanned "row by row", from left to right and top to bottom; of course, it may also be traversed in other directions. In particular, multiple groups of sizes and steps can be set for the preset sliding window: after a traversal with the preset sliding window of size 1 at step 1 is completed, the target image can be traversed again with the preset sliding window of size 2 at step 2, then with the preset sliding window of size 3 at step 3, and so on. Local areas of various sizes can thereby be determined and connected regions of various sizes extracted, fully mining the content of the target image.
Each time the preset sliding window moves, one local area is determined, and multiple connected regions may be extracted within it. In the present exemplary embodiment, only one candidate region may be selected from the connected regions of each local area; specifically, the connected region with the largest radian coefficient among the connected regions of each local area can be determined as the candidate pupil region. Since the radian coefficient characterizes, to a certain extent, how close a connected region is to a true pupil, this approach selects the candidate pupil region closest to a true pupil in each local area and can further improve the accuracy of the pupil positioning method.
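The multi-scale traversal can be expressed as a simple generator, with (size, step) pairs playing the role of the preset groups described above (a sketch; nothing in the text prescribes this interface):

```python
def sliding_windows(img_h, img_w, sizes_and_steps):
    """Yield (y, x, size) for every placement of every preset window,
    scanning left to right and top to bottom, one (size, step) group
    after another."""
    for size, step in sizes_and_steps:
        for y in range(0, img_h - size + 1, step):
            for x in range(0, img_w - size + 1, step):
                yield y, x, size
```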
In an exemplary embodiment, step S120 can also be implemented by the following steps:
One or more candidate connected regions are extracted from the target image;
For any candidate connected region Ai, the bounding rectangle Di of Ai is obtained, and the form factor Q(Ai) of Ai is calculated by the following formula:
Q(Ai) = bi / ai; (7)
The candidate connected regions whose form factors meet a preset condition are determined as the connected regions.
Wherein ai is the long-side length of the bounding rectangle Di and bi is the short-side length of Di. The significance of the above method is as follows: when connected regions are extracted by a conventional method, only the color difference between adjacent pixels is considered, and adjacent pixels with identical or similar colors are joined into a connected region. In the present exemplary embodiment, the connected regions extracted by the conventional method serve as candidate connected regions, and their shapes are screened on this basis: a candidate connected region whose shape is relatively similar to the ellipse of a pupil is taken as a connected region, and the subsequent steps such as calculating the radian coefficient are further performed on it. This can simplify the processing load of the subsequent steps and further improve the accuracy of pupil positioning.
When the shape of a candidate connected region is screened, the candidate connected region is approximately represented as the ellipse inscribed in its bounding rectangle; the ratio of the minor axis to the major axis of this ellipse is indicated by the form factor, which thereby characterizes the shape of the candidate connected region. A candidate connected region whose form factor meets the preset condition is a connected region.
In one exemplary embodiment, the preset condition can be a specific threshold or a specific numerical range for the form factor. For example, if it is empirically determined that the ratio of the minor axis to the major axis of a pupil should lie in [0.7, 0.9], the candidate connected regions whose form factors calculated by formula (7) lie in [0.7, 0.9] can be taken as the connected regions.
In one exemplary embodiment, based on the major-axis radius STlong and the minor-axis radius STshort of the standard pupil calculated by formulas (3) and (4), the preset condition for the form factor can be determined by the following method:
Calculate the standard shape coefficient STshape:
STshape = STshort / STlong;
Wherein the meanings of STlong and STshort are as described for formulas (3) and (4). After STshape is calculated, the preset condition can be set as a numerical range around STshape, such as STshape ± 0.2 or [0.8·STshape, 1.2·STshape].
It should be noted that in the above exemplary embodiment the form factor is calculated as the ratio of the short-side length to the long-side length of the bounding rectangle Di. It should be understood that the form factor can also be calculated as the ratio of the long-side length to the short-side length of Di; the corresponding preset condition is then changed to the reciprocal of the above preset condition, for example, [0.7, 0.9] is changed to [1.1, 1.4].
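The shape screening can be sketched as follows, taking the form factor as the short-side/long-side ratio of the bounding rectangle and using the empirically chosen range [0.7, 0.9] from the example above; the dictionary representation of a region and the concrete range bounds are illustrative assumptions.

```python
def form_factor(rect_long, rect_short):
    """Form factor Q = b / a: short-side length over long-side length
    of the bounding rectangle of a candidate connected region."""
    return rect_short / rect_long

def screen_by_shape(candidates, low=0.7, high=0.9):
    """Keep only the candidate connected regions whose form factor
    meets the preset condition (here a numerical range)."""
    return [c for c in candidates
            if low <= form_factor(c["long"], c["short"]) <= high]

# A pupil-like region (ratio 0.8) passes; an elongated region
# (ratio 0.3), e.g. a specular highlight, is screened out.
kept = screen_by_shape([{"long": 10.0, "short": 8.0},
                        {"long": 10.0, "short": 3.0}])
```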
Further, in one exemplary embodiment, the above bounding rectangle Di can be the rotated bounding rectangle of the connected region Ai. The meaning and significance of the rotated bounding rectangle are as described above. In the present exemplary embodiment, characterizing the shape of a connected region by its rotated bounding rectangle achieves higher accuracy.
In one exemplary embodiment, after step S150, the pupil positioning method can further include the following step:
The centroid of the objective pupil region is determined as the pupil center.
For the objective pupil region, the centroid, the center of gravity and the geometric center are usually the same point. After the pupil center is determined, subsequent pupil tracking or action recognition is facilitated.
In one exemplary embodiment, the centroid of the objective pupil region can be obtained by the following method:
x0 = (1/g)·Σ xi,  y0 = (1/g)·Σ yi;
Wherein (xi, yi) is any pixel in the objective pupil region, g is the total number of pixels in the objective pupil region, and (x0, y0) are the coordinates of the centroid. That is, the mean values of the abscissas and the ordinates of all pixels in the objective pupil region are taken respectively as the coordinates of the centroid, thereby determining the pupil center.
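A short sketch of this centroid computation — the mean of the abscissas and ordinates of all pixels in the objective pupil region. The pixel-list representation is an assumption made for illustration.

```python
def pupil_center(pixels):
    """Centroid (x0, y0) of the objective pupil region: the means of
    the abscissas and ordinates of all g pixels in the region."""
    g = len(pixels)
    x0 = sum(x for x, _ in pixels) / g
    y0 = sum(y for _, y in pixels) / g
    return x0, y0

# Four corner pixels of a square region; the centroid is its middle.
center = pupil_center([(1, 1), (3, 1), (1, 3), (3, 3)])
```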
In one exemplary embodiment, the position of the centroid can also be determined based on an ellipse fit to the objective pupil region. The detailed process is as follows: the edge points of the objective pupil region are extracted using the Canny edge detection algorithm (a multi-stage edge detection algorithm) to obtain several edge points; an ellipse is fitted to the edge points using the least squares method, and the general equation of the fitted ellipse is obtained as follows:
a·x² + b·x·y + c·y² + d·x + e·y + 1 = 0; (11)
The centroid (xc, yc) is obtained from the general equation of the ellipse, wherein xc = (b·e − 2·c·d)/(4·a·c − b²) and yc = (b·d − 2·a·e)/(4·a·c − b²), thereby determining the pupil center.
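For reference, the centre of the conic of formula (11) follows from setting its gradient to zero (2a·x + b·y + d = 0 and b·x + 2c·y + e = 0); the sketch below solves this 2×2 system. The Canny edge extraction and the least-squares fit themselves are omitted here.

```python
def ellipse_center(a, b, c, d, e):
    """Centre of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + 1 = 0,
    obtained by solving dF/dx = 2a*x + b*y + d = 0 and
    dF/dy = b*x + 2c*y + e = 0."""
    det = 4 * a * c - b * b
    xc = (b * e - 2 * c * d) / det
    yc = (b * d - 2 * a * e) / det
    return xc, yc

# Check with a circle centred at (2, 3): x^2 + y^2 - 4x - 6y + 12 = 0,
# normalised so that the constant term is 1 (divide through by 12).
cx, cy = ellipse_center(1 / 12, 0.0, 1 / 12, -4 / 12, -6 / 12)
```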
In one exemplary embodiment, after step S110, the pupil positioning method can further include the following step:
The target image is pre-processed.
The pre-processing makes the target image easier to use for connected-region extraction and pupil positioning. In one exemplary embodiment, the pre-processing may include any one or more of the following: grayscale processing, Gaussian filtering, binarization and morphological processing. Grayscale processing converts a color image, or an image in another color mode, into a grayscale image in which each pixel is characterized by a gray value. Gaussian filtering suppresses the noise of the target image while retaining its fine details as far as possible, so as to improve the validity and reliability of the subsequent image processing and analysis; illustratively, referring to Fig. 5, a 5×5 Gaussian filter is applied to the image of Fig. 5(a) to obtain the image of Fig. 5(b). Binarization converts the target image into a binary image; in the present exemplary embodiment, binarization can be applied to the entire target image or to the local regions determined by the default sliding window. Morphological processing extracts from the image the image components that are meaningful for expressing and describing region shapes, and can remove interfering pixels; in the present exemplary embodiment, morphological processing can likewise be applied to the entire target image or to the local regions determined by the default sliding window.
In addition, the pre-processing of the target image can also include conventional pre-processing methods such as down-sampling, image sharpening and color-saturation adjustment.
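Two of the pre-processing steps named above (grayscale processing and binarization) can be sketched in pure Python as follows; a real implementation would normally rely on an image-processing library, and the Rec. 601 luminance weights and the threshold of 50 are illustrative assumptions, not values fixed by the text.

```python
def to_gray(rgb_image):
    """Grayscale processing: characterize each pixel by a gray value
    (here using the common Rec. 601 luminance weights)."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for r, g, b in row]
            for row in rgb_image]

def binarize(gray_image, threshold=50):
    """Binarization: dark pixels (possible pupil) become 1, the rest 0.
    The threshold is an assumed value for illustration."""
    return [[1 if v < threshold else 0 for v in row]
            for row in gray_image]

# A dark pixel maps to 1, a bright pixel to 0.
binary = binarize(to_gray([[(10, 10, 10), (200, 200, 200)]]))
```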
Fig. 6 shows the flow of another pupil positioning method in the present exemplary embodiment. Referring to Fig. 6, after the target image is obtained in step S610, the target image is pre-processed in step S620; the target image is then traversed by the default sliding window in step S630, and the local region covered by the default sliding window is obtained in step S640. The connected regions in the local region are extracted in step S650, which can specifically include steps S651 to S653: in step S651, candidate connected regions are extracted from the local region; in step S652, the form factor of each candidate connected region is calculated; in step S653, the candidate connected regions whose form factors meet the preset condition are determined as connected regions. After the connected regions are extracted, step S660 is performed to calculate the radian coefficients of the connected regions and to choose a candidate pupil region, which can specifically include steps S661 to S663: in step S661, the bounding rectangle of each connected region is obtained; in step S662, the area of each connected region is calculated; in step S663, the radian coefficient is calculated from the ratio of the area of the connected region to that of its bounding rectangle. A candidate pupil region can be chosen from the connected regions according to the radian coefficients; in the present exemplary embodiment, the connected region with the largest radian coefficient in each local region can be determined as the candidate pupil region. Step S670 is then performed to calculate the area ratio of each candidate pupil region to its corresponding default sliding window, and in step S680 the candidate pupil region whose area ratio is closest to the standard proportion is determined as the objective pupil region. Finally, step S690 can be performed to determine the pupil center from the objective pupil region, thereby completing the complete process of pupil positioning.
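Steps S660 to S680 can be sketched as follows, reading the radian coefficient as the ratio of a connected region's area to the area of its bounding rectangle (as described for step S663) and treating the standard proportion as a precomputed constant; the data layout and the numbers are illustrative.

```python
def radian_coefficient(region_area, rect_long, rect_short):
    """Ratio of the connected region's area to the area of its
    bounding rectangle; an inscribed ellipse gives about pi/4."""
    return region_area / (rect_long * rect_short)

def pick_candidate(regions):
    """Step S660: keep the connected region with the largest radian
    coefficient in a local region as the candidate pupil region."""
    return max(regions, key=lambda r: radian_coefficient(
        r["area"], r["long"], r["short"]))

def pick_objective(candidates, window_area, standard_proportion):
    """Steps S670-S680: choose the candidate pupil region whose area
    ratio in its default sliding window is closest to the standard
    proportion as the objective pupil region."""
    return min(candidates, key=lambda c: abs(
        c["area"] / window_area - standard_proportion))
```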
In one exemplary embodiment, the pupil positioning method can be applied to a virtual reality device, and the target image can then be obtained by the following step:
The eye image of the user is acquired by the image acquisition module of the virtual reality device.
The eye image of the user is the target image. A camera can be arranged in virtual reality glasses; when the user wears the virtual reality glasses, the eye image of the user is captured by the camera.
Further, in one exemplary embodiment, the above image acquisition module may include an infrared imaging unit, and the above eye image can be an infrared image.
The infrared image can be a near-infrared image, a mid-infrared image or a far-infrared image, which facilitates distinguishing the pupil from the iris region in the eye image and improves the accuracy of pupil positioning. For example, the infrared imaging unit may include infrared LED lamps (light emitting diodes) and an infrared camera. One or more infrared LED lamps can be arranged around the virtual reality lens, and the wavelength of the infrared LED lamps can illustratively be set to about 850 nm (in the near-infrared band); the infrared camera can be arranged below the virtual reality lens to capture the eye image of the user while the virtual reality device is in use, and its frame rate can illustratively be set to 120 frames per second or more, so that eye images of higher quality can be acquired, the camera having good sensitivity to near-infrared light of 850 nm. It can be seen that the parameters of the infrared imaging unit can be selected according to actual needs, and the present disclosure does not specifically limit them.
The exemplary embodiment of the present disclosure further provides a pupil positioning device. Referring to Fig. 7, the device 700 may include: a target image acquisition module 710, configured to obtain a target image; a connected region extraction module 730, configured to extract one or more connected regions from the target image; a radian coefficient determination module 740, configured to determine the radian coefficients of the connected regions; a candidate region choosing module 750, configured to choose candidate pupil regions from the connected regions according to the radian coefficients; and a target region determination module 760, configured to determine an objective pupil region from the candidate pupil regions based on the size of each candidate pupil region.
In one exemplary embodiment, the connected region extraction module can be configured to extract one or more connected regions from the target image by means of a default sliding window.
In one exemplary embodiment, the target region determination module 760 may include: an area ratio calculation unit 761, configured to calculate the area ratio of each candidate pupil region in the default sliding window; and a standard proportion comparison unit 762, configured to determine the candidate pupil region whose area ratio is closest to the standard proportion as the objective pupil region.
In one exemplary embodiment, the connected region extraction module 730 can be configured to traverse the target image using the default sliding window to obtain a plurality of local regions of the target image, and to extract one or more connected regions in each local region; correspondingly, the candidate region choosing module 750 can be configured to determine the connected region with the largest radian coefficient in each local region as the candidate pupil region.
In one exemplary embodiment, the connected region extraction module 730 may include: a candidate connected region extraction unit 731, configured to extract one or more candidate connected regions from the target image; a form factor calculation unit 732, configured to, for any candidate connected region Ai, obtain the bounding rectangle Di of Ai and calculate the form factor Q(Ai) of Ai by the formula Q(Ai) = bi / ai, wherein ai is the long-side length of the bounding rectangle Di and bi is the short-side length of Di; and a preset condition screening unit 733, configured to determine the candidate connected regions whose form factors meet the preset condition as connected regions.
In one exemplary embodiment, the radian coefficient determination module 740 may include: a bounding rectangle acquisition unit 741, configured to, for any connected region Bj, obtain the bounding rectangle Dj of Bj; a connected region area acquisition unit 742, configured to obtain the area S(Bj) of Bj; and a radian coefficient calculation unit 743, configured to calculate the radian coefficient R(Bj) of Bj by the formula R(Bj) = S(Bj) / (aj·bj), wherein aj is the long-side length of the bounding rectangle Dj and bj is the short-side length of Dj.
In one exemplary embodiment, the above bounding rectangle can be a rotated bounding rectangle.
In one exemplary embodiment, the pupil positioning device 700 can further include: a pupil center determination module 770, configured to determine the centroid of the objective pupil region as the pupil center.
In one exemplary embodiment, the pupil positioning device 700 can further include: an image pre-processing module 720, configured to pre-process the target image after the target image acquisition module obtains the target image.
In one exemplary embodiment, the above pre-processing may include any one or more of the following: grayscale processing, Gaussian filtering, binarization and morphological processing.
In one exemplary embodiment, the pupil positioning device can be applied to a virtual reality device; the target image acquisition module can then be the image acquisition module of the virtual reality device, configured to acquire an eye image of the user as the target image.
In one exemplary embodiment, referring to Fig. 8, the above image acquisition module is the image acquisition module 800 in Fig. 8, which may include an infrared imaging unit 810 and other necessary component units 820; the above eye image serving as the target image can then be an infrared image.
In one exemplary embodiment, the infrared imaging unit 810 may include an infrared projection unit 811 and an infrared image acquisition unit 812, wherein the infrared projection unit 811 is used to project infrared light for imaging and can be, for example, an infrared LED lamp, and the infrared image acquisition unit 812 is used to acquire an infrared image of the user's eye and can be, for example, an infrared camera.
The details of each of the above modules/units have been described in detail in the corresponding method embodiments and are therefore not repeated here.
The exemplary embodiment of the present disclosure further provides an electronic device capable of implementing the above method.
Those of ordinary skill in the art will understand that the various aspects of the present disclosure can be implemented as a system, a method or a program product. Therefore, the various aspects of the present disclosure can be embodied in the following forms: a complete hardware embodiment, a complete software embodiment (including firmware, microcode and the like), or an embodiment combining hardware and software aspects, which may be collectively referred to herein as a "circuit", a "module" or a "system".
The electronic device 900 according to this exemplary embodiment of the present disclosure is described below with reference to Fig. 9. The electronic device 900 shown in Fig. 9 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 9, the electronic device 900 takes the form of a general-purpose computing device. The components of the electronic device 900 may include, but are not limited to: at least one processing unit 910, at least one storage unit 920, a bus 930 connecting different system components (including the storage unit 920 and the processing unit 910), and a display unit 940.
The storage unit stores program code that can be executed by the processing unit 910, so that the processing unit 910 performs the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Methods" section of this specification. For example, the processing unit 910 can perform steps S110 to S150 shown in Fig. 1, and can also perform steps S610 to S690 shown in Fig. 6, and so on.
The storage unit 920 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 921 and/or a cache storage unit 922, and may further include a read-only storage unit (ROM) 923.
The storage unit 920 may also include a program/utility 924 having a set of (at least one) program modules 925. Such program modules 925 include, but are not limited to: an operating system, one or more application programs, other program modules and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
The bus 930 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
The electronic device 900 can also communicate with one or more external devices 1100 (such as a keyboard, a pointing device, a Bluetooth device and the like), with one or more devices that enable a user to interact with the electronic device 900, and/or with any device (such as a router, a modem and the like) that enables the electronic device 900 to communicate with one or more other computing devices. Such communication can take place through an input/output (I/O) interface 950. Moreover, the electronic device 900 can communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 960. As shown, the network adapter 960 communicates with the other modules of the electronic device 900 through the bus 930. It should be understood that, although not shown in the figure, other hardware and/or software modules can be used in conjunction with the electronic device 900, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems and the like.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein can be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a USB flash drive, a removable hard disk or the like) or on a network, and which includes several instructions that cause a computing device (which can be a personal computer, a server, a terminal device, a network device or the like) to perform the method according to the exemplary embodiments of the present disclosure.
The exemplary embodiment of the present disclosure further provides a computer-readable storage medium on which a program product capable of implementing the above method of this specification is stored. In some possible embodiments, the various aspects of the present disclosure can also be implemented in the form of a program product including program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Methods" section of this specification.
Referring to Fig. 10, a program product 1000 for implementing the above method according to the exemplary embodiment of the present disclosure is described. The program product can employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium can be any tangible medium that contains or stores a program, and the program can be used by, or in connection with, an instruction execution system, apparatus or device.
The program product can employ any combination of one or more readable media. The readable medium can be a readable signal medium or a readable storage medium. The readable storage medium can be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above.
A computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which readable program code is carried. Such a propagated data signal can take various forms, including but not limited to an electromagnetic signal, an optical signal or any appropriate combination of the above. The readable signal medium can also be any readable medium other than a readable storage medium, and the readable medium can send, propagate or transmit a program to be used by, or in connection with, an instruction execution system, apparatus or device.
The program code contained on the readable medium can be transmitted with any suitable medium, including but not limited to wireless, wired, optical cable, RF and the like, or any appropriate combination of the above.
The program code for performing the operations of the present disclosure can be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computing device, partly on the user's computing device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. In the case involving a remote computing device, the remote computing device can be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computing device (for example, through the Internet by means of an Internet service provider).
In addition, the above drawings are merely schematic illustrations of the processing included in the method according to the exemplary embodiments of the present disclosure, and are not intended for limitation. It is easy to understand that the processing shown in the above drawings does not indicate or limit the temporal order of the processing. In addition, it is also easy to understand that the processing can be performed, for example, synchronously or asynchronously in a plurality of modules.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory. In fact, according to the exemplary embodiments of the present disclosure, the features and functions of two or more modules or units described above can be embodied in one module or unit; conversely, the features and functions of one module or unit described above can be further divided to be embodied by a plurality of modules or units.
Those skilled in the art, after considering the specification and practicing the invention disclosed herein, will readily conceive of other embodiments of the present disclosure. The present application is intended to cover any variations, uses or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or conventional techniques in the art not disclosed by the present disclosure. The specification and the examples are to be considered as illustrative only, and the true scope and spirit of the present disclosure are indicated by the claims.
It should be understood that the present disclosure is not limited to the precise structures that have been described above and shown in the drawings, and that various modifications and changes can be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (15)

1. A pupil positioning method, characterized by comprising:
obtaining a target image;
extracting one or more connected regions from the target image;
determining radian coefficients of the connected regions;
choosing a candidate pupil region from the connected regions according to the radian coefficients;
determining, based on the size of each candidate pupil region, an objective pupil region from the candidate pupil regions.
2. The method according to claim 1, characterized in that extracting one or more connected regions from the target image comprises:
extracting one or more connected regions from the target image by means of a default sliding window.
3. The method according to claim 2, characterized in that determining, based on the size of each candidate pupil region, an objective pupil region from the candidate pupil regions comprises:
calculating an area ratio of each candidate pupil region in the default sliding window;
determining the candidate pupil region whose area ratio is closest to a standard proportion as the objective pupil region.
4. The method according to claim 2, characterized in that extracting one or more connected regions from the target image by means of a default sliding window comprises:
traversing the target image using the default sliding window to obtain a plurality of local regions of the target image;
extracting one or more connected regions in each local region;
and choosing a candidate pupil region from the connected regions according to the radian coefficients comprises:
determining the connected region with the largest radian coefficient in each local region as the candidate pupil region.
5. The method according to claim 1, characterized in that extracting one or more connected regions from the target image comprises:
extracting one or more candidate connected regions from the target image;
for any candidate connected region Ai, obtaining a bounding rectangle Di of Ai, and calculating a form factor Q(Ai) of Ai by the following formula:
Q(Ai) = bi / ai;
wherein ai is the long-side length of the bounding rectangle Di and bi is the short-side length of Di;
determining the candidate connected regions whose form factors meet a preset condition as the connected regions.
6. The method according to claim 1, characterized in that determining radian coefficients of the connected regions comprises:
for any connected region Bj, obtaining a bounding rectangle Dj of Bj;
obtaining an area S(Bj) of Bj;
calculating a radian coefficient R(Bj) of Bj by the following formula:
R(Bj) = S(Bj) / (aj · bj);
wherein aj is the long-side length of the bounding rectangle Dj and bj is the short-side length of Dj.
7. The method according to claim 5 or 6, characterized in that the bounding rectangle comprises a rotated bounding rectangle.
8. The method according to claim 1, characterized in that after the objective pupil region is determined from the candidate pupil regions, the method further comprises:
determining a centroid of the objective pupil region as a pupil center.
9. The method according to claim 1, characterized in that after the target image is obtained, the method further comprises:
pre-processing the target image.
10. The method according to claim 9, characterized in that the pre-processing comprises any one or more of the following: grayscale processing, Gaussian filtering, binarization and morphological processing.
11. The method according to claim 1, characterized in that the method is applied to a virtual reality device, and obtaining a target image comprises:
acquiring an eye image of a user by an image acquisition module of the virtual reality device, and determining the eye image as the target image.
12. The method according to claim 11, characterized in that the image acquisition module comprises an infrared imaging unit, and the eye image is an infrared image.
13. A pupil positioning device, characterized by comprising:
a target image acquisition module, configured to obtain a target image;
a connected region extraction module, configured to extract one or more connected regions from the target image;
a radian coefficient determination module, configured to determine radian coefficients of the connected regions;
a candidate region choosing module, configured to choose a candidate pupil region from the connected regions according to the radian coefficients;
a target region determination module, configured to determine, based on the size of each candidate pupil region, an objective pupil region from the candidate pupil regions.
14. An electronic device, characterized by comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method according to any one of claims 1 to 12 by executing the executable instructions.
15. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 12.
CN201910002314.8A 2019-01-02 2019-01-02 Pupil positioning method, pupil positioning device, electronic equipment, storage medium Pending CN109784248A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910002314.8A CN109784248A (en) 2019-01-02 2019-01-02 Pupil positioning method, pupil positioning device, electronic equipment, storage medium


Publications (1)

Publication Number Publication Date
CN109784248A true CN109784248A (en) 2019-05-21

Family

ID=66499834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910002314.8A Pending CN109784248A (en) 2019-01-02 2019-01-02 Pupil positioning method, pupil positioning device, electronic equipment, storage medium

Country Status (1)

Country Link
CN (1) CN109784248A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101059836A (en) * 2007-06-01 2007-10-24 华南理工大学 Human eye positioning and human eye state recognition method
CN102456137A (en) * 2010-10-20 2012-05-16 上海青研信息技术有限公司 Sight line tracking preprocessing method based on near-infrared reflection point characteristic
CN103136512A (en) * 2013-02-04 2013-06-05 重庆市科学技术研究院 Pupil positioning method and system
CN103246865A (en) * 2012-02-03 2013-08-14 展讯通信(上海)有限公司 Method and device for detecting red eye and method and device for removing same
CN103778594A (en) * 2014-01-16 2014-05-07 天津大学 Red-eye detection method based on flashlight and non-flashlight image pairs
CN105286802A (en) * 2015-11-30 2016-02-03 华南理工大学 Driver fatigue detection method based on video information
CN108392170A (en) * 2018-02-09 2018-08-14 中北大学 A kind of human eye follow-up mechanism and recognition positioning method for optometry unit

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555875A (en) * 2019-07-25 2019-12-10 深圳壹账通智能科技有限公司 Pupil radius detection method and device, computer equipment and storage medium
CN111105410A (en) * 2019-12-27 2020-05-05 中国人民解放军陆军军医大学第二附属医院 Hematopoietic tissue proportion determining device and method based on bone marrow biopsy image
CN112162629A (en) * 2020-09-11 2021-01-01 天津科技大学 Real-time pupil positioning method based on circumscribed rectangle
CN112399074A (en) * 2020-10-10 2021-02-23 上海鹰瞳医疗科技有限公司 Pupil positioning method and device, electronic equipment and readable storage medium
CN112399074B (en) * 2020-10-10 2022-04-05 上海鹰瞳医疗科技有限公司 Pupil positioning method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN109784248A (en) Pupil positioning method, pupil positioning device, electronic equipment, storage medium
CN108351961B (en) Biological recognition system and computer implemented method based on image
CN110619628B (en) Face image quality assessment method
JP4755202B2 (en) Face feature detection method
US8351662B2 (en) System and method for face verification using video sequence
US10445574B2 (en) Method and apparatus for iris recognition
US6611613B1 (en) Apparatus and method for detecting speaking person's eyes and face
US9355305B2 (en) Posture estimation device and posture estimation method
US7668338B2 (en) Person tracking method and apparatus using robot
US7266225B2 (en) Face direction estimation using a single gray-level image
US8977010B2 (en) Method for discriminating between a real face and a two-dimensional image of the face in a biometric detection process
KR102393298B1 (en) Method and apparatus for iris recognition
MX2012010602A (en) Face recognizing apparatus, and face recognizing method.
Holte et al. Fusion of range and intensity information for view invariant gesture recognition
CN110532965A (en) Age recognition methods, storage medium and electronic equipment
JP2021526269A (en) Object tracking methods and equipment, electronics and storage media
CN112446322B (en) Eyeball characteristic detection method, device, equipment and computer readable storage medium
Hebbale et al. Real time COVID-19 facemask detection using deep learning
CN113435452A (en) Electrical equipment nameplate text detection method based on improved CTPN algorithm
Vezhnevets Face and facial feature tracking for natural human-computer interface
CN112633222B (en) Gait recognition method, device, equipment and medium based on countermeasure network
KR101343623B1 (en) adaptive color detection method, face detection method and apparatus
CN107145820B (en) Binocular positioning method based on HOG characteristics and FAST algorithm
CN108446639A (en) Low-power consumption augmented reality equipment
CN109492455A (en) Live subject detection and identity identifying method, medium, system and relevant apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination