CN108985172A - Structured-light-based gaze tracking method, apparatus, device and storage medium - Google Patents

Structured-light-based gaze tracking method, apparatus, device and storage medium Download PDF

Info

Publication number
CN108985172A
CN108985172A · CN201810623544.1A · CN201810623544A · CN 108985172 A
Authority
CN
China
Prior art keywords
structured light
eye
feature parameter
user
light image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810623544.1A
Other languages
Chinese (zh)
Inventor
姜欣
路伟成
刘伟
黄通兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing 7Invensun Technology Co Ltd
Beijing Qixin Yiwei Information Technology Co Ltd
Original Assignee
Beijing Qixin Yiwei Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qixin Yiwei Information Technology Co Ltd filed Critical Beijing Qixin Yiwei Information Technology Co Ltd
Priority to CN201810623544.1A priority Critical patent/CN108985172A/en
Publication of CN108985172A publication Critical patent/CN108985172A/en
Priority to PCT/CN2019/089352 priority patent/WO2019237942A1/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/18 — Eye characteristics, e.g. of the iris
    • G06V40/193 — Preprocessing; feature extraction

Abstract

The invention discloses a structured-light-based gaze tracking method, apparatus, device and storage medium. The method comprises: when a gaze tracking event is detected to be triggered, projecting structured light onto the user's face and acquiring a structured light image of the face; determining the user's eye feature parameters from the structured light image; inputting the eye feature parameters into a pre-trained gaze mapping model to obtain the target coordinates corresponding to the eye feature parameters, wherein the pre-trained gaze mapping model is generated by training an artificial neural network model on a set number of groups of training data, the training data comprising standard eye feature parameters and corresponding standard target coordinates; and determining the user's gaze point from the target coordinates. The embodiments of the invention solve the prior-art problem that the low accuracy of the gaze mapping model leads to inaccurate gaze tracking results, optimize the gaze mapping model, and improve the accuracy of gaze tracking results.

Description

Structured-light-based gaze tracking method, apparatus, device and storage medium
Technical field
The embodiments of the present invention relate to machine vision technology, and in particular to a structured-light-based gaze tracking method, apparatus, device and storage medium.
Background technique
Gaze tracking, also known as eye tracking, is a technology that uses electronic, mechanical, optical and other detection means to obtain the direction of the user's current visual attention. Because it enables direct and accurate estimation of the line of sight, eye tracking is widely used in fields such as human-computer interaction, medical diagnosis and rehabilitation of the disabled. Commonly used techniques include face detection, eye feature detection and tracking, head pose detection, and gaze modeling. Depending on the system composition and the gaze estimation method, gaze tracking can be divided into intrusive and non-intrusive approaches.
The prior art generally adopts an optical, non-intrusive approach: a camera captures images of the face and eyes, eye feature parameters (including the pupil center coordinates) are extracted by image processing, a gaze mapping model is established, and the gaze direction is finally determined. The gaze mapping model is a model built on the mapping relationship between the eye feature parameters and the gaze landing point coordinates (target coordinates). The accuracy of the gaze mapping model directly affects the accuracy of the gaze tracking result. This model is usually constructed by fitting the mapping relationship between the two with a high-order polynomial; a gaze mapping model built this way has limited accuracy and poor robustness, and accordingly the accuracy of the gaze tracking result is also low.
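The prior-art polynomial mapping criticized above can be sketched as follows. This is an illustrative second-order form fitted by least squares; the polynomial order, the feature set and all function names are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def poly_features(px, py):
    """Second-order polynomial features of the pupil center (px, py)."""
    return np.stack([np.ones_like(px), px, py, px * py, px**2, py**2], axis=-1)

def fit_poly_mapping(pupil_xy, screen_xy):
    """Least-squares fit of screen coordinates from pupil-center features.

    pupil_xy: (M, 2) pupil centers; screen_xy: (M, 2) calibration targets.
    Returns the (6, 2) coefficient matrix A with screen ~= features @ A.
    """
    F = poly_features(pupil_xy[:, 0], pupil_xy[:, 1])
    A, *_ = np.linalg.lstsq(F, screen_xy, rcond=None)
    return A

def apply_poly_mapping(A, pupil_xy):
    F = poly_features(pupil_xy[:, 0], pupil_xy[:, 1])
    return F @ A
```

Such a fixed-order fit is the baseline the invention improves on: it cannot adapt its form to the data, which is the robustness limitation the neural network model is meant to address.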
Summary of the invention
The embodiments of the present invention provide a structured-light-based gaze tracking method, apparatus, device and storage medium, so as to optimize the gaze mapping model and improve the accuracy of gaze tracking results.
In a first aspect, an embodiment of the present invention provides a structured-light-based gaze tracking method, comprising:
when a gaze tracking event is detected to be triggered, projecting structured light onto the user's face and acquiring a structured light image of the user's face;
determining the user's eye feature parameters from the structured light image;
inputting the eye feature parameters into a pre-trained gaze mapping model to obtain the target coordinates corresponding to the eye feature parameters, wherein the pre-trained gaze mapping model is generated by training an artificial neural network model on a set number of groups of training data, the training data comprising standard eye feature parameters and corresponding standard target coordinates;
determining the user's gaze point from the target coordinates.
Further, generating the pre-trained gaze mapping model by training the artificial neural network model on the set number of groups of training data, the training data comprising standard eye feature parameters and corresponding standard target coordinates, comprises:
taking the standard eye feature parameters as the input variables of the artificial neural network model and the standard target coordinates as its output variables;
determining the output weights of the artificial neural network model from the input variables, the output variables, and the input weights and thresholds of the artificial neural network model, wherein the input weights are the weights from the input nodes of the model to the hidden-layer nodes, and the thresholds are the thresholds of the hidden-layer nodes;
determining the pre-trained gaze mapping model from the input weights, the thresholds and the output weights of the artificial neural network model.
Further, when a gaze tracking event is detected to be triggered, projecting structured light onto the user's face and acquiring a structured light image of the user's face comprises:
when the gaze tracking event is detected to be triggered, projecting modulated structured light onto the user's face by a structured light source;
acquiring the structured light image of the user's face by a camera.
Further, determining the user's eye feature parameters from the structured light image comprises:
generating a three-dimensional model from the structured light image;
determining the user's eye feature parameters from the three-dimensional model.
Further, generating the three-dimensional model from the structured light image comprises:
demodulating the offset information corresponding to the distorted positions in the structured light image;
converting the offset information into depth information;
generating the three-dimensional model from the depth information.
Further, generating the three-dimensional model from the structured light image comprises:
preprocessing the structured light image with an image preprocessing algorithm to obtain a processed structured light image;
generating the three-dimensional model from the processed structured light image.
Further, the eye feature parameters include the pupil center coordinates or the iris center coordinates.
In a second aspect, an embodiment of the present invention further provides a structured-light-based gaze tracking apparatus, comprising:
a structured light image acquisition module, configured to, when a gaze tracking event is detected to be triggered, project structured light onto the user's face and acquire a structured light image of the user's face;
an eye feature parameter determination module, configured to determine the user's eye feature parameters from the structured light image;
a target coordinate acquisition module, configured to input the eye feature parameters into a pre-trained gaze mapping model and obtain the target coordinates corresponding to the eye feature parameters, wherein the pre-trained gaze mapping model is generated by training an artificial neural network model on a set number of groups of training data, the training data comprising standard eye feature parameters and corresponding standard target coordinates;
a gaze point determination module, configured to determine the user's gaze point from the target coordinates.
In a third aspect, an embodiment of the present invention further provides a device, comprising:
one or more processors;
a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the structured-light-based gaze tracking method described above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the structured-light-based gaze tracking method described above.
In the embodiments of the present invention, when a gaze tracking event is detected to be triggered, structured light is projected onto the user's face and a structured light image of the face is acquired; the user's eye feature parameters are determined from the structured light image; the eye feature parameters are then input into a gaze mapping model generated by training an artificial neural network model on a set number of groups of training data, yielding the target coordinates corresponding to the eye feature parameters; finally, the user's gaze point is determined from the target coordinates. This solves the prior-art problem that the low accuracy and poor robustness of the gaze mapping model lead to inaccurate gaze tracking results, optimizes the gaze mapping model, and improves the accuracy of gaze tracking results.
Description of the drawings
Fig. 1 is a flowchart of a structured-light-based gaze tracking method in Embodiment 1 of the present invention;
Fig. 2 is a flowchart of a structured-light-based gaze tracking method in Embodiment 2 of the present invention;
Fig. 3 is a structural schematic diagram of a structured-light-based gaze tracking apparatus in Embodiment 3 of the present invention;
Fig. 4 is a structural schematic diagram of a device in Embodiment 4 of the present invention.
Detailed description
The present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the present invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Embodiment 1
Fig. 1 is a flowchart of a structured-light-based gaze tracking method provided by Embodiment 1 of the present invention. This embodiment is applicable to structured-light-based gaze tracking. The method can be executed by a structured-light-based gaze tracking apparatus, which can be implemented in software and/or hardware and configured in a device. As shown in Fig. 1, the method specifically comprises the following steps:
Step 110: when a gaze tracking event is detected to be triggered, project structured light onto the user's face and acquire a structured light image of the user's face.
In a specific embodiment of the present invention, a gaze tracking event may be any gaze-based event executed in the computer, psychology, advertising or industrial field. Specifically, in the computer field, gaze tracking uses the line of sight to complete the interaction between the user and a terminal (such as a computer, mobile phone or tablet): the user's gaze can replace operations performed with a mouse or with a finger on a touchscreen, the landing point of the gaze being the coordinate of the mouse pointer or finger on the terminal screen, and corresponding operations are completed by moving the gaze when operating the terminal by line of sight. For example, moving the gaze can scroll the page down to read a web page or e-book; fixating on a region for a set time can complete a click or double-click; a virtual keyboard created in the interface can provide text input; gaze control can help the elderly or the disabled operate household appliances; recording and analyzing gaze points on an image helps capture the user's intention; and whether to highlight the area around a gaze landing point can be decided from its dwell time. In the psychology field, applications include physiological-psychology studies of contour illusions, reading research, and research on visual search mechanisms. In the advertising field, merchants can analyze consumer psychology from gaze-point statistics, determine the effectiveness and popularity of goods, and understand consumption tendencies to guide promotion and marketing actions. In the industrial engineering field, gaze tracking is applied in flight simulators to record the eye and head movements of pilots during simulated flight and provide gaze-point analysis data, and in vehicle driving to monitor driver fatigue, since the driver's eye state and eye movements differ between conditions. The following description takes a user reading an e-book on a terminal based on gaze tracking as an example.
In order to complete the operations involved in reading an e-book on a terminal based on gaze tracking, such as turning pages, selecting chapters or returning to the table of contents, the following flow is generally executed: first, acquire an image of the user's face, recognize it to obtain the corresponding eye image, and determine the eye feature parameters; then, convert the determined eye feature parameters, according to the gaze mapping relationship, into the target coordinates corresponding to a target area on the terminal screen; finally, determine the user's gaze point from the target coordinates, and execute the corresponding operation according to the gaze trajectory determined by the gaze points. As can be seen from the above, in order to determine the user's gaze point, the image of the user's eye region must first be acquired. To obtain it, infrared light or structured light can be projected onto the user's face, an image sensor (such as a camera) captures the image of the face, and the eye image is obtained by recognition. Here, structured light is a set of light rays projected in known spatial directions. The structured light method is based on the measuring principle of optical triangulation, and its working principle is as follows: an optical projector projects a controllable light spot, light stripe or light plane onto the surface of the measured object (such as the user's face), forming on the surface a three-dimensional image modulated by the surface shape. This image is captured by an image sensor (such as a camera) at another position, yielding a two-dimensional image containing the distortion information; using the geometric relationship of the system, the three-dimensional coordinates of the measured object are computed by the triangulation principle. When the measured object is the user's face, structured light can be projected onto the face to obtain a structured light image of the face.
Step 120: determine the user's eye feature parameters from the structured light image.
In a specific embodiment of the present invention, the user's eye feature parameters can be determined from the structured light image of the face. The principle is as follows: the structured light image of the eyes is identified within the structured light image of the face; the three-dimensional information of the eyes is demodulated from the structured light image of the eyes to obtain the three-dimensional coordinates of the eyes; the user's eye feature parameters are then determined based on the position of the eye features within the eyes. The eye features may specifically include the pupil position, pupil shape, pupil radius, iris position, iris shape, eyelid position, eye corner position or light spot position (Purkinje image position). The eye feature parameters may specifically include the pupil center coordinates or the iris center coordinates. Determining each of these from the structured light image is illustrated in turn. Example 1: demodulate the three-dimensional information of the eyes from their structured light image to obtain the three-dimensional coordinates of the eyes, then determine the user's pupil center coordinates from the position of the pupil, its shape and its radius within the eyes. The advantage of this arrangement is that, compared with the prior art, which determines the pupil center from the distribution characteristics of the pupil gray-level region within the whole eye image, defaults the pupil shape to a circle, coarsely locates the pupil center with a hybrid projection function, and then refines it using the pupil-center pixel values found from the pupil radius, a procedure that is complex to operate, it reduces the complexity of determining the user's pupil center coordinates and accordingly also improves the efficiency of gaze tracking. Example 2: demodulate the three-dimensional information of the eyes from their structured light image to obtain the three-dimensional coordinates of the eyes, then determine the user's iris center coordinates from the position of the iris, its shape and its radius within the eyes. The advantage is that, compared with the prior art, which computes the center of the ellipse formed by the iris contour and combines the iris size with the distance from the reference iris plane to the image acquisition device to calculate the iris center coordinates in the presented iris plane, it reduces the complexity of determining the user's iris center coordinates and accordingly also improves the efficiency of gaze tracking.
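One simple way to realize the pupil-center step above, once the 3D points belonging to the pupil have been demodulated, is to take their centroid. This is a sketch under an assumed data layout (an (N, 3) point array), not the patent's implementation:

```python
import numpy as np

def pupil_center(pupil_points):
    """Estimate the pupil center as the centroid of the 3D pupil points.

    pupil_points: (N, 3) array of three-dimensional coordinates demodulated
    from the structured light image and labeled as belonging to the pupil.
    """
    pts = np.asarray(pupil_points, dtype=float)
    if pts.ndim != 2 or pts.shape[1] != 3:
        raise ValueError("expected an (N, 3) array of 3D points")
    return pts.mean(axis=0)
```

The centroid needs no circularity assumption and no radius search, which reflects the reduced complexity claimed relative to the projection-function approach.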
Step 130: input the eye feature parameters into the pre-trained gaze mapping model to obtain the target coordinates corresponding to the eye feature parameters, wherein the pre-trained gaze mapping model is generated by training an artificial neural network model on a set number of groups of training data, the training data comprising standard eye feature parameters and corresponding standard target coordinates.
In a specific embodiment of the present invention, as noted above, after the eye feature parameters are determined, they must be converted, according to the gaze mapping relationship, into the target coordinates corresponding to a target area on the terminal screen, and the user's gaze point is finally determined from the target coordinates. The gaze mapping relationship here can be obtained from the pre-trained gaze mapping model. More specifically, a set number of groups of training data generated while the user performs gaze tracking is obtained from a standard database, each group comprising standard eye feature parameters and corresponding standard target coordinates; taking the standard eye feature parameters as input variables and the corresponding standard target coordinates as output variables, a preset mathematical model is trained with a machine learning algorithm to generate the pre-trained gaze mapping model. The standard database can specifically be generated as follows: at least one position on the terminal screen is calibrated in advance as a standard target position; when the standard target positions are placed in a first set coordinate system, their coordinates become known and are taken as the standard target coordinates; the user is then asked to fixate on the standard target positions in a preset order while the position corresponding to the eye feature parameters at each standard target position is recorded in turn, yielding the standard eye feature positions; when these are placed in a second set coordinate system, their coordinates become known and are taken as the standard eye feature parameters. More specifically, each standard eye feature parameter can be acquired in the same way: structured light is projected onto the user's eyes, a structured light image of the eyes is obtained, and the user's standard eye feature parameters are determined from the structured light image; of course, it will be understood that the way of determining the standard eye feature parameters can be set according to the actual situation and is not specifically limited here. The set number of groups of training data may be formed from the data of the same user or from the data of different users, which can likewise be set according to the actual situation and is not specifically limited here. Preferably, considering the specificity of a user's eyes, the set number of groups of training data is formed from the data of the same user; the advantage of this arrangement is that the prediction accuracy of the gaze mapping model built on it is better. Each group of training data is a data pair formed by standard eye feature parameters and the corresponding standard target coordinates. The preset mathematical model is trained with a machine learning algorithm to generate the pre-trained gaze mapping model; the machine learning algorithm may include an artificial neural network algorithm or an ant colony algorithm, which can be chosen according to the actual situation and is not specifically limited here.
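The calibration procedure above can be sketched as follows. The 3×3 grid, the visiting order and the screen size are illustrative assumptions; the patent only requires at least one calibrated standard target position:

```python
def calibration_targets(width, height, rows=3, cols=3):
    """Return a rows x cols grid of standard target coordinates on a
    width x height screen, visited row by row in a preset order."""
    targets = []
    for r in range(rows):
        for c in range(cols):
            x = (c + 0.5) * width / cols
            y = (r + 0.5) * height / rows
            targets.append((x, y))
    return targets
```

Pairing each target's coordinates with the eye feature parameters recorded while the user fixates on it yields one group of training data per target.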
Optionally, an artificial neural network algorithm is selected to train the preset mathematical model and generate the pre-trained gaze mapping model; that is, the pre-trained gaze mapping model is generated by training an artificial neural network model on the set number of groups of training data. An artificial neural network (Artificial Neural Network, ANN) is a mathematical model that, based on the basic principles of biological neural networks, simulates the way the human nervous system processes complex information, taking network topology as its theoretical basis after abstracting the structure of the human brain and its response mechanism to external stimuli. Depending on the complexity of the system, the model processes information by adjusting the interconnection weights among a large number of internal nodes (neurons). It will be understood that the accuracy of the gaze mapping model affects the accuracy of the gaze tracking result; therefore, to ensure accurate gaze tracking, the accuracy of the gaze mapping model needs to be improved. Since an ANN has the advantages of self-learning, self-adaptation, self-organization, nonlinearity and deeply parallel operation, training on the set number of groups of training data with an ANN model to generate the pre-trained gaze mapping model can improve the performance of the gaze mapping model. Compared with the prior art, in which the gaze mapping model constructed from a high-order polynomial has low accuracy and poor robustness, so that the gaze tracking result is likewise inaccurate, constructing the gaze mapping model with an ANN model can improve the accuracy of the gaze mapping model and, in turn, the accuracy of the gaze tracking result.
Step 140: determine the user's gaze point from the target coordinates.
In a specific embodiment of the present invention, each target coordinate corresponds to one gaze point; based on the target coordinates, the user's gaze points are determined, and the user's gaze trajectory is then determined from the gaze points. As mentioned above, the following takes a user reading an e-book on a terminal based on gaze tracking as an example. Specifically, the user is reading a certain page of an e-book on the terminal; when finishing the current page, the user needs a corresponding page-turn operation to advance to the next page. For this purpose, correspondences between gaze trajectories and page-turn operations can be established in advance; for example, a gaze trajectory that forms a top-to-bottom line segment from the gaze points corresponds to turning the page down. Thus, from the eye feature parameters obtained while the user's eyes move, the corresponding target coordinates are obtained based on the gaze mapping model; the user's gaze points are determined from the target coordinates, and the user's gaze trajectory from the gaze points. More specifically, this trajectory is a top-to-bottom line segment formed by the gaze points, each target coordinate corresponding to one gaze point on the terminal screen. Based on the correspondence between gaze trajectories and page-turn operations, the trajectory formed at this time is judged to correspond to a downward page turn.
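The trajectory-to-operation correspondence above can be sketched as follows. The thresholds, the `"page_down"` label and the function name are illustrative assumptions, not from the patent:

```python
def classify_trajectory(gaze_points, min_dy=100, max_dx=50):
    """Map a sequence of (x, y) gaze points to a page-turn operation.

    Returns 'page_down' when the points form a roughly vertical
    top-to-bottom segment (y grows by at least min_dy pixels while x
    stays within a max_dx band), otherwise None.
    """
    if len(gaze_points) < 2:
        return None
    xs = [p[0] for p in gaze_points]
    ys = [p[1] for p in gaze_points]
    if ys[-1] - ys[0] >= min_dy and max(xs) - min(xs) <= max_dx:
        return "page_down"
    return None
```

In practice the thresholds would be tuned to the screen size; other trajectories (left-to-right segments, fixations) could map to other operations in the same lookup fashion.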
The technical solution of the embodiment of the present invention projects structured light onto the user's face when a gaze tracking event is detected to be triggered, acquires a structured light image of the face, determines the user's eye feature parameters from the structured light image, inputs the eye feature parameters into a gaze mapping model generated by training an artificial neural network model on a set number of groups of training data to obtain the corresponding target coordinates, and finally determines the user's gaze point from the target coordinates. This solves the prior-art problem that the low accuracy and poor robustness of the gaze mapping model lead to inaccurate gaze tracking results, optimizes the gaze mapping model, and improves the accuracy of gaze tracking results.
Optionally, on the basis of the above technical solution, generating the pre-trained gaze mapping model by training an artificial neural network model on a set number of groups of training data, the training data comprising standard eye feature parameters and corresponding standard target coordinates, may specifically comprise:
taking the standard eye feature parameters as the input variables of the artificial neural network model and the standard target coordinates as its output variables;
determining the output weights of the artificial neural network model from the input variables, the output variables, and the input weights and thresholds of the model, wherein the input weights of the artificial neural network model are the weights from its input nodes to the hidden-layer nodes, and the thresholds are the thresholds of the hidden-layer nodes;
determining the gaze mapping model from the input weights, the thresholds and the output weights of the artificial neural network model.
In a specific embodiment of the present invention, an ANN model is composed of a large number of interconnected nodes (neurons), each node representing a specific output function called the activation function. Each connection between two nodes carries a weight for the signal passing through that connection; the output of the ANN model depends on the network structure, the connection mode, the weights and the activation function. The following takes a three-layer ANN model as an example, i.e., one comprising an input layer, a hidden layer and an output layer. Correspondingly, it should be noted that the input weights of the ANN model are the weights from its input nodes to the hidden-layer nodes, and the thresholds of the ANN model are the thresholds of the hidden-layer nodes. It should also be noted that the input weights and thresholds of the ANN model are set at random. The basic principle of training the pre-trained gaze mapping model on the set number of groups of training data with the ANN model is as follows: take the standard eye feature parameters as the input variables of the ANN model and the standard target coordinates as its output variables; determine the output weights of the ANN model from the input variables, the output variables, and the input weights and thresholds of the ANN model; once the input weights, the thresholds and the output weights of the ANN model are all determined, a determined pre-trained gaze mapping model is obtained.
Illustratively, suppose there are M groups of training data (x_i, y_i), where x_i denotes the i-th input variable, a three-dimensional vector; y_i is the i-th output variable, a two-dimensional vector; ω_i is the input weight connecting the input nodes to the i-th hidden-layer node; θ_i is the threshold of the i-th hidden-layer node; and β_i is the i-th output weight. With N hidden-layer nodes and activation function g(x), the ANN model is Σ_{i=1}^{N} β_i · g(ω_i · x_j + θ_i) = y_j, where j = 1, 2, ..., M. The preset ANN model Σ_{i=1}^{N} β_i · g(ω_i · x_j + θ_i) = y_j is trained using the input variables x_j, the input weights ω_i of the ANN model, and the thresholds θ_i of the ANN model, to determine the output weights β_i. The pre-trained gaze mapping model is then determined from the input weights ω_i, the thresholds θ_i, and the output weights β_i of the ANN model.
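The training scheme above — random input weights and thresholds, with only the output weights solved for — can be sketched as a single-hidden-layer least-squares fit (an extreme-learning-machine-style solve). This is a minimal illustration, not the patent's implementation; the function names, the tanh activation, and the toy data are assumptions.

```python
import numpy as np

def train_gaze_mapping(X, Y, n_hidden=40, seed=0):
    """Fit output weights beta for y_j = sum_i beta_i * g(w_i . x_j + theta_i).
    X: (M, 3) standard eye feature parameters; Y: (M, 2) standard target coords."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (n_hidden, X.shape[1]))  # random input weights w_i
    theta = rng.uniform(-1, 1, n_hidden)            # random hidden-layer thresholds
    H = np.tanh(X @ W.T + theta)                    # hidden-layer output, g = tanh
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)    # least-squares output weights
    return W, theta, beta

def predict_gaze(x, W, theta, beta):
    """Map eye feature parameters to target (screen) coordinates."""
    return np.tanh(x @ W.T + theta) @ beta

# Toy training data: target coordinates as a smooth function of eye features.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 3))
Y = X[:, :2] * 0.5 + 0.1
W, theta, beta = train_gaze_mapping(X, Y)
err = np.abs(predict_gaze(X, W, theta, beta) - Y).mean()
```

With enough hidden nodes, the least-squares solve recovers the smooth mapping closely; at run time only `predict_gaze` is needed per frame.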
Optionally, on the basis of the above technical solution, projecting structured light onto the user's face and acquiring a structured-light image of the user's face when a gaze tracking event is detected to be triggered may specifically include:
When the gaze tracking event is detected to be triggered, projecting modulated structured light onto the user's face through a structured light source.
Acquiring the structured-light image of the user's face through a video camera.
In an embodiment of the present invention, depending on the beam pattern of the structured light projected by the optical projection device, structured-light methods can specifically be divided into the point structured light method, the line structured light method, the multi-line structured light method, the area structured light method, and the phase method. Correspondingly, optical projection devices can be divided into point structured light projectors, line structured light projectors, multi-line structured light projectors, area structured light projectors, and so on. Based on the above, modulated structured light can be understood as the light beam projected by the optical projection device. Correspondingly, the modulated structured light can also be divided into point structured light projected by a point structured light projector, line structured light projected by a line structured light projector, multi-line structured light projected by a multi-line structured light projector, area structured light projected by an area structured light projector, and so on. When the measured object is the user's face, the structured-light image of the user's face can specifically be acquired as follows: first, the structured light source projects modulated structured light onto the user's face, and the video camera then acquires the structured-light image of the user's face. The following description takes line structured light projected by a line structured light projector as the example of modulated structured light.
A line structured light projector transforms the point light source generated by a laser through a lens; correspondingly, line structured light can be understood as an extension of point structured light. The basic principle of the line structured light method is as follows: the line structured light projector projects line structured light onto the measured object (e.g., the user's face), forming on the object's surface a light stripe that changes with the shape of the object. In the image, this appears as distortion and discontinuity of the stripe: the degree of distortion is related to the depth of the object's surface and also to the relative position between the line structured light projector and the video camera, while a discontinuity indicates a physical gap on the object's surface. By analyzing the acquired image information of the distorted light stripe, the three-dimensional coordinates of the object's surface can be determined. It should be noted that because the amount of information acquired by the line structured light method is much greater than that of the point structured light method while the complexity of the system does not increase, this not only raises the speed of data acquisition but also yields higher measurement accuracy. It should also be noted that as the depth of the object's surface increases, the degree of distortion of the stripe increases with it. In addition, the depth of the object's surface refers to the depth of the surface relative to a set reference plane.
Correspondingly, when the measured object is the user's face, the structured-light image of the user's face can specifically be acquired as follows: first, the line structured light projector projects line structured light onto the user's face, forming a light stripe on the facial surface; then, the video camera captures the light stripe and generates the structured-light image of the user's face from it.
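As a minimal illustration of how the captured stripe can be localized before its distortion is analyzed (not part of the patent text), the stripe's sub-pixel center in each image column can be found with an intensity-weighted centroid; the function name and the synthetic Gaussian-profile stripe are assumptions.

```python
import numpy as np

def stripe_centers(img):
    """Locate the light stripe's sub-pixel row center in each column
    by an intensity-weighted centroid (assumes one bright stripe)."""
    rows = np.arange(img.shape[0])[:, None]
    weights = img.astype(float)
    total = weights.sum(axis=0)
    total[total == 0] = 1.0  # avoid division by zero in empty columns
    return (rows * weights).sum(axis=0) / total

# Synthetic 100x50 image: stripe row varies with column, mimicking surface shape.
h, w = 100, 50
img = np.zeros((h, w))
true_rows = 50 + 10 * np.sin(np.linspace(0, np.pi, w))
r = np.arange(h)
for c in range(w):
    img[:, c] = np.exp(-0.5 * ((r - true_rows[c]) / 2.0) ** 2)  # Gaussian stripe

centers = stripe_centers(img)
max_err = np.abs(centers - true_rows).max()
```

The recovered per-column centers are the raw signal whose offsets from a reference line encode the surface depth.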
Optionally, on the basis of the above technical solution, determining the user's eye feature parameters from the structured-light image may specifically include:
Generating a three-dimensional model from the structured-light image.
Determining the user's eye feature parameters from the three-dimensional model.
In an embodiment of the present invention, as described above, the structured-light image of the user's eye carries the depth information of the eye's surface. This depth information can be demodulated from the structured-light image, so that the three-dimensional coordinates of the eye's surface can be determined and the eye can be three-dimensionally reconstructed, yielding a three-dimensional model of the user's eye; the user's eye feature parameters are then determined from the three-dimensional model.
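A sketch of one way to obtain an eye feature parameter (a 3D pupil center) from a reconstructed depth map, assuming the pupil pixels have already been segmented and assuming hypothetical pinhole-camera intrinsics; none of these values or names come from the patent.

```python
import numpy as np

def pupil_center_3d(depth, mask, fx=500.0, fy=500.0, cx=160.0, cy=120.0):
    """Back-project masked pupil pixels to 3D with a pinhole model
    (fx, fy, cx, cy are assumed intrinsics) and return their centroid."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x.mean(), y.mean(), z.mean()])

# Toy depth map (constant 50 units) and a circular pupil mask at pixel (120, 160).
depth = np.full((240, 320), 50.0)
vv, uu = np.mgrid[0:240, 0:320]
mask = (vv - 120) ** 2 + (uu - 160) ** 2 < 10 ** 2
center = pupil_center_3d(depth, mask)
```

With the mask centered on the principal point, the centroid lands on the optical axis at the surface depth, which is the expected sanity check for the back-projection.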
Optionally, on the basis of the above technical solution, generating the three-dimensional model from the structured-light image may specifically include:
Demodulating the offset information corresponding to the distorted positions in the structured-light image.
Converting the offset information into depth information.
Generating the three-dimensional model from the depth information.
In an embodiment of the present invention, as described above, the degree of distortion can be interpreted as the offset information corresponding to a distorted position: the greater the distortion, the greater the offset value represented by the offset information. Since the degree of distortion is related to the depth of the user's eye surface, the offset information can be converted into depth information, from which the three-dimensional coordinates of the eye surface can be determined; the eye is then three-dimensionally reconstructed to obtain the three-dimensional model of the user's eye.
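The exact mapping from stripe offset to depth depends on the calibration geometry, which the patent does not specify; a common simplification is a stereo-like triangulation model where depth is inversely proportional to the total offset. The baseline, focal length, and reference offset below are assumed calibration values for illustration only.

```python
import numpy as np

def offset_to_depth(offset_px, baseline=60.0, focal_px=800.0, ref_offset_px=480.0):
    """Convert a measured stripe offset (pixels) into depth via the
    triangulation model z = baseline * focal / (ref_offset + offset).
    baseline in mm, focal length in px; both are assumed, not from the patent."""
    total = ref_offset_px + np.asarray(offset_px, dtype=float)
    return baseline * focal_px / total

# Larger distortion offset -> point closer to the camera (smaller depth).
depths = offset_to_depth(np.array([0.0, 20.0, 40.0]))
```

The monotonic decrease of depth with offset is the property the demodulation step relies on: per-pixel offsets become a per-pixel depth map, which is then meshed into the three-dimensional model.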
Optionally, on the basis of the above technical solution, generating the three-dimensional model from the structured-light image may specifically include:
Processing the structured-light image with an image preprocessing algorithm to obtain a processed structured-light image.
Generating the three-dimensional model from the processed structured-light image.
In an embodiment of the present invention, the quality of the structured-light image directly affects the accuracy of the user's eye feature parameters and hence also the accuracy of the gaze tracking result. The following again takes line structured light projected by a line structured light projector as the modulated structured light. The ideal structured-light image contains no background at all, only the stripe position and the stripe grayscale information, with gray values following a symmetric Gaussian distribution; this facilitates analysis of the image information. In actual operation, however, the structured-light image is altered by the interference of many factors, such as image noise or a complex background. Specifically, the factors affecting structured-light image quality may include the light source, environmental noise, and the measured object. More specifically, regarding the light source: in a line structured-light system, the light sources can be divided into the line structured light itself and external light sources. Under normal circumstances, variations in the parameters of the line structured light are regular and governed by rules; they mainly involve light-plane thickness, diffraction, and light intensity, and their influence can be eliminated by appropriate means, for example by selecting a line structured light projector with suitable line width and intensity for the practical situation. External light sources are mainly ambient light, such as sunlight or incandescent lamps; their variations cause differences in the contrast of the structured-light image and seriously affect automatic threshold segmentation of the image. For such contrast differences, methods such as image normalization can be applied, which improves the imaging quality of the structured-light image. Regarding environmental noise: the external environment, the frame grabber, the video camera, and so on all generate noise. These noises are random and complex and introduce Gaussian noise or salt-and-pepper noise into the structured-light image; a suitable image processing algorithm can eliminate them as far as possible without losing structured-light image detail, improving the signal-to-noise ratio and yielding a high-quality structured-light image. Regarding the measured object: when line structured light is projected onto it, the object's surface reflectivity, shape, color, and so on all affect the structured-light image to varying degrees. To reduce the interference of these factors on the structured-light image while retaining the needed grayscale information, the acquired structured-light image must be processed with an image preprocessing algorithm, which may include image filtering, image enhancement, or image segmentation. More specifically, image filtering is an effective method of suppressing structured-light image noise and may include mean filtering, Gaussian filtering, median filtering, adaptive filtering, and so on. The purpose of image segmentation is to identify the target part and the background part and extract only the structured-light stripe; it is mainly divided into edge detection, threshold segmentation, and region segmentation. It should be noted that the appropriate image preprocessing algorithm can be selected according to the actual situation and is not specifically limited here.
By processing the structured-light image with an image preprocessing algorithm to obtain a processed structured-light image, and then generating the three-dimensional model from the processed structured-light image, the quality of the structured-light image is improved, which in turn also improves the accuracy of the gaze tracking result.
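As a minimal sketch of two of the preprocessing steps named above — median filtering against salt-and-pepper noise, followed by threshold segmentation to isolate the stripe — the following is an assumed toy implementation, not the patent's algorithm; a 3x3 median is used and edge pixels are left unfiltered for brevity.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter (edge pixels kept as-is) to suppress salt-and-pepper noise."""
    out = img.copy()
    stack = np.stack([img[i:img.shape[0] - 2 + i, j:img.shape[1] - 2 + j]
                      for i in range(3) for j in range(3)])
    out[1:-1, 1:-1] = np.median(stack, axis=0)
    return out

def threshold_segment(img, thresh=0.5):
    """Threshold segmentation: keep only the bright stripe, zero the background."""
    return (img > thresh).astype(float) * img

# Stripe image corrupted with one salt-noise pixel.
img = np.zeros((20, 20))
img[9:12, :] = 1.0   # a 3-pixel-wide horizontal light stripe
img[3, 5] = 1.0      # isolated salt-noise pixel
clean = threshold_segment(median_filter3(img))
```

The isolated noise pixel is removed by the median filter while the stripe survives, illustrating why filtering precedes stripe analysis.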
Optionally, on the basis of the above technical solution, the eye feature parameters may specifically include the pupil center coordinate or the iris center coordinate.
Optionally, on the basis of the above technical solution, the image preprocessing algorithm may specifically include at least one of image filtering, image enhancement, or image segmentation.
In an embodiment of the present invention, the image preprocessing algorithm can be selected according to the actual situation and is not specifically limited here.
It should be noted that the specific implementations of the present invention are based on an overall architecture of a light projector, a video camera, and a terminal, which coordinate with one another to process data.
Embodiment Two
Fig. 2 is a flowchart of a gaze tracking method based on structured light provided by Embodiment Two of the present invention. This embodiment is applicable to gaze tracking based on structured light. The method can be executed by a structured-light-based gaze tracking device, which can be implemented in software and/or hardware and can be configured in equipment. As shown in Fig. 2, the method specifically comprises the following steps:
Step 201: when a gaze tracking event is detected to be triggered, project line structured light onto the user's face through the structured light source.
Step 202: acquire the structured-light image of the user's face through the video camera.
Step 203: demodulate the offset information corresponding to the distorted positions in the structured-light image.
Step 204: convert the offset information into depth information.
Step 205: generate a three-dimensional model from the depth information.
Step 206: determine the user's eye feature parameters from the three-dimensional model.
Step 207: obtain a set number of groups of training data from a standard database, the training data including standard eye feature parameters and standard target coordinates.
Step 208: take the standard eye feature parameters as the input variables of the artificial neural network model and the standard target coordinates as the output variables of the artificial neural network model.
Step 209: determine the output weights of the artificial neural network model from the input variables, the output variables, and the input weights and thresholds of the artificial neural network model, where the input weights of the artificial neural network model are the weights from the input nodes of the artificial neural network model to the hidden-layer nodes, and the thresholds are the thresholds of the hidden-layer nodes.
Step 210: determine the pre-trained gaze mapping model from the input weights, thresholds, and output weights of the artificial neural network model.
Step 211: input the eye feature parameters into the pre-trained gaze mapping model to obtain the target coordinates corresponding to the eye feature parameters.
Step 212: determine the user's gaze point from the target coordinates.
In an embodiment of the present invention, it should be noted that the eye feature parameters may specifically include the pupil center coordinate or the iris center coordinate.
In the technical solution of this embodiment of the present invention, when a gaze tracking event is detected to be triggered, structured light is projected onto the user's face and a structured-light image of the user's face is acquired; the user's eye feature parameters are then determined from the structured-light image; the eye feature parameters are input into the gaze mapping model generated by training an artificial neural network model on a set number of groups of training data, obtaining the target coordinates corresponding to the eye feature parameters; finally, the user's gaze point is determined from the target coordinates. This solves the prior-art problem that gaze tracking results are inaccurate because the gaze mapping model has low accuracy and poor robustness; it optimizes the gaze mapping model and improves the accuracy of gaze tracking results.
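Steps 203 through 212 above form a linear pipeline. The sketch below wires them together end to end; every function body here is a hypothetical placeholder standing in for the corresponding step, not an implementation from the patent.

```python
# Hypothetical placeholder implementations so the pipeline runs end to end;
# the real content of each step is described in the embodiment text.
def demodulate_offsets(img):    return [p - 10 for p in img]               # step 203
def offsets_to_depth(offsets):  return [100.0 / (1 + o) for o in offsets]  # step 204
def reconstruct_3d(depth):      return depth                               # step 205
def eye_features(model3d):      return sum(model3d) / len(model3d)         # step 206

def track_gaze(structured_light_image, gaze_model):
    offsets = demodulate_offsets(structured_light_image)  # step 203
    depth = offsets_to_depth(offsets)                     # step 204
    model3d = reconstruct_3d(depth)                       # step 205
    eye_params = eye_features(model3d)                    # step 206
    return gaze_model(eye_params)                         # steps 211-212: gaze point

# The gaze model (steps 207-210) is trained offline; a stand-in mapping is used here.
point = track_gaze([12, 14, 16], gaze_model=lambda f: (f, f * 0.5))
```

The design point is that the trained gaze mapping model is a drop-in function at the end of the pipeline: retraining it (steps 207 to 210) does not touch the acquisition and reconstruction stages.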
Embodiment Three
Fig. 3 is a structural schematic diagram of a gaze tracking device based on structured light provided by Embodiment Three of the present invention. This embodiment is applicable to gaze tracking based on structured light. The device can be implemented in software and/or hardware and can be configured in equipment. As shown in Fig. 3, the device specifically includes:
a structured-light image acquisition module 310, configured to project structured light onto the user's face and acquire a structured-light image of the user's face when a gaze tracking event is detected to be triggered;
an eye feature parameter determination module 320, configured to determine the user's eye feature parameters from the structured-light image;
a target coordinate acquisition module 330, configured to input the eye feature parameters into a pre-trained gaze mapping model and obtain the target coordinates corresponding to the eye feature parameters, where the pre-trained gaze mapping model is generated by training an artificial neural network model on a set number of groups of training data, the training data including standard eye feature parameters and corresponding standard target coordinates; and
a gaze point determination module 340, configured to determine the user's gaze point from the target coordinates.
In the technical solution of this embodiment of the present invention, when a gaze tracking event is detected to be triggered, the structured-light image acquisition module 310 projects structured light onto the user's face and acquires a structured-light image of the user's face; the eye feature parameter determination module 320 determines the user's eye feature parameters from the structured-light image; the target coordinate acquisition module 330 inputs the eye feature parameters into the gaze mapping model generated by training an artificial neural network model on a set number of groups of training data, obtaining the target coordinates corresponding to the eye feature parameters; finally, the gaze point determination module 340 determines the user's gaze point from the target coordinates. This solves the prior-art problem that gaze tracking results are inaccurate because the gaze mapping model has low accuracy and poor robustness; it optimizes the gaze mapping model and improves the accuracy of gaze tracking results.
Optionally, on the basis of the above technical solution, the target coordinate acquisition module 330 may specifically include:
an input and output variable determination submodule, configured to take the standard eye feature parameters as the input variables of the artificial neural network model and the standard target coordinates as the output variables of the artificial neural network model;
an output weight determination submodule, configured to determine the output weights of the artificial neural network model from the input variables, the output variables, and the input weights and thresholds of the artificial neural network model, where the input weights of the artificial neural network model are the weights from the input nodes of the artificial neural network model to the hidden-layer nodes, and the thresholds are the thresholds of the hidden-layer nodes; and
a gaze mapping model determination submodule, configured to determine the pre-trained gaze mapping model from the input weights, thresholds, and output weights of the artificial neural network model.
Optionally, on the basis of the above technical solution, the structured-light image acquisition module 310 may specifically include:
a structured light projection submodule, configured to project line structured light onto the user's face through the structured light source when a gaze tracking event is detected to be triggered; and
a structured-light image generation submodule, configured to acquire the structured-light image of the user's face through the video camera.
Optionally, on the basis of the above technical solution, the eye feature parameter determination module 320 may specifically include:
a three-dimensional model generation submodule, configured to generate a three-dimensional model from the structured-light image; and
an eye feature parameter determination submodule, configured to determine the user's eye feature parameters from the three-dimensional model.
Optionally, on the basis of the above technical solution, the three-dimensional model generation submodule may specifically include:
an offset information acquisition unit, configured to demodulate the offset information corresponding to the distorted positions in the structured-light image;
a depth information acquisition unit, configured to convert the offset information into depth information; and
a first three-dimensional model generation unit, configured to generate the three-dimensional model from the depth information.
Optionally, on the basis of the above technical solution, the three-dimensional model generation submodule may specifically include:
a processed structured-light image acquisition unit, configured to process the structured-light image with an image preprocessing algorithm to obtain a processed structured-light image; and
a second three-dimensional model generation unit, configured to generate the three-dimensional model from the processed structured-light image.
Optionally, on the basis of the above technical solution, the eye feature parameters may specifically include the pupil center coordinate or the iris center coordinate.
Optionally, on the basis of the above technical solution, the image preprocessing algorithm includes at least one of image filtering, image enhancement, or image segmentation.
The structured-light-based gaze tracking device provided by this embodiment of the present invention can execute the structured-light-based gaze tracking method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
Embodiment Four
Fig. 4 is a structural schematic diagram of equipment provided by Embodiment Four of the present invention. Fig. 4 shows a block diagram of example equipment 412 suitable for implementing embodiments of the present invention. The equipment 412 shown in Fig. 4 is only an example and should not impose any restriction on the function and scope of use of the embodiments of the present invention.
As shown in Fig. 4, the equipment 412 takes the form of a general-purpose computing device. The components of the equipment 412 may include, but are not limited to: one or more processors 416, a system memory 428, and a bus 418 connecting the different system components (including the system memory 428 and the processors 416).
The bus 418 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The equipment 412 typically comprises a variety of computer-system-readable media. These media can be any usable media accessible to the equipment 412, including volatile and non-volatile media and removable and non-removable media.
The system memory 428 may include computer-system-readable media in the form of volatile memory, such as random access memory (RAM) 430 and/or cache memory 432. The equipment 412 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, the storage system 434 can be used for reading from and writing to non-removable, non-volatile magnetic media (not shown in Fig. 4, commonly referred to as a "hard disk drive"). Although not shown in Fig. 4, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (such as a "floppy disk") and an optical disk drive for reading from and writing to a removable non-volatile optical disk (such as a CD-ROM, DVD-ROM, or other optical media) can be provided. In these cases, each drive can be connected to the bus 418 through one or more data media interfaces. The memory 428 may include at least one program product having a group (for example, at least one) of program modules configured to perform the functions of each embodiment of the present invention.
A program/utility 440 having a group (at least one) of program modules 442 may be stored, for example, in the memory 428. Such program modules 442 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 442 generally perform the functions and/or methods of the embodiments described in the present invention.
The equipment 412 can also communicate with one or more external devices 414 (such as a keyboard, a pointing device, a display 424, etc.), with one or more devices that enable a user to interact with the equipment 412, and/or with any device (such as a network card, a modem, etc.) that enables the equipment 412 to communicate with one or more other computing devices. Such communication can be carried out through an input/output (I/O) interface 422. Moreover, the equipment 412 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 420. As shown in the figure, the network adapter 420 communicates with the other modules of the equipment 412 through the bus 418. It should be understood that although not shown in Fig. 4, other hardware and/or software modules can be used in conjunction with the equipment 412, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and so on.
The processor 416, by running the programs stored in the system memory 428, executes various functional applications and data processing, for example implementing a gaze tracking method based on structured light provided by the embodiments of the present invention, comprising:
When a gaze tracking event is detected to be triggered, projecting structured light onto the user's face and acquiring a structured-light image of the user's face.
Determining the user's eye feature parameters from the structured-light image.
Inputting the eye feature parameters into a pre-trained gaze mapping model to obtain the target coordinates corresponding to the eye feature parameters, where the pre-trained gaze mapping model is generated by training an artificial neural network model on a set number of groups of training data, the training data including standard eye feature parameters and corresponding standard target coordinates.
Determining the user's gaze point from the target coordinates.
Embodiment Five
Embodiment Five of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements a gaze tracking method based on structured light as provided by the embodiments of the present invention, the method comprising:
When a gaze tracking event is detected to be triggered, projecting structured light onto the user's face and acquiring a structured-light image of the user's face.
Determining the user's eye feature parameters from the structured-light image.
Inputting the eye feature parameters into a pre-trained gaze mapping model to obtain the target coordinates corresponding to the eye feature parameters, where the pre-trained gaze mapping model is generated by training an artificial neural network model on a set number of groups of training data, the training data including standard eye feature parameters and corresponding standard target coordinates.
Determining the user's gaze point from the target coordinates.
The computer storage medium of the embodiments of the present invention can adopt any combination of one or more computer-readable media. A computer-readable medium can be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium can be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium can be any tangible medium that contains or stores a program, which can be used by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal can take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium; such a medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
The program code contained on a computer-readable medium can be transmitted with any suitable medium, including but not limited to wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
The computer program code for carrying out the operations of the present invention can be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the present invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments, and substitutions can be made by those skilled in the art without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, the present invention is not limited to the above embodiments; without departing from the inventive concept, it may also include other equivalent embodiments, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A gaze tracking method based on structured light, characterized by comprising:
when it is monitored that a gaze tracking event is triggered, projecting structured light onto a user's face and acquiring a structured light image of the user's face;
determining an eye feature parameter of the user according to the structured light image;
inputting the eye feature parameter into a pre-trained gaze mapping model to obtain a target coordinate corresponding to the eye feature parameter, wherein the pre-trained gaze mapping model is generated by training an artificial neural network model on a set number of groups of training data, the training data comprising standard eye feature parameters and corresponding standard target coordinates; and
determining a gaze point of the user according to the target coordinate.
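For illustration only (not part of the claims): the claimed method reduces to a capture → feature extraction → learned-mapping pipeline. A minimal Python sketch under stated assumptions — the brightness-centroid "feature" and the identity "model" below are placeholders, not the patent's actual three-dimensional reconstruction or trained network:

```python
import numpy as np

def track_gaze(structure_light_image, mapping_model):
    """Sketch of the claimed pipeline: image -> eye feature -> target coordinate."""
    # Step 1: derive the eye feature parameter from the structured-light image.
    # A brightness centroid stands in for the patent's 3-D reconstruction,
    # purely for illustration.
    ys, xs = np.nonzero(structure_light_image > structure_light_image.mean())
    pupil_centre = np.array([xs.mean(), ys.mean()])

    # Step 2: feed the feature into the pre-trained gaze mapping model.
    target_xy = mapping_model(pupil_centre)

    # Step 3: the target coordinate is taken as the user's gaze point.
    return target_xy

# Toy usage: an identity "model" and a synthetic bright blob centred at (12, 8).
img = np.zeros((16, 24))
img[7:10, 11:14] = 1.0
gaze = track_gaze(img, mapping_model=lambda f: f)
```

In the claimed system, `mapping_model` would be the pre-trained gaze mapping model of claim 2, and the feature would be derived from the three-dimensional model of claim 4.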
2. The method according to claim 1, wherein generating the pre-trained gaze mapping model by training an artificial neural network model on the set number of groups of training data, the training data comprising standard eye feature parameters and corresponding standard target coordinates, comprises:
taking the standard eye feature parameters as input variables of the artificial neural network model, and taking the standard target coordinates as output variables of the artificial neural network model;
determining output weights of the artificial neural network model according to the input variables, the output variables, and input weights and thresholds of the artificial neural network model, wherein the input weights of the artificial neural network model are the weights from its input nodes to its hidden-layer nodes, and the thresholds are the thresholds of the hidden-layer nodes; and
determining the pre-trained gaze mapping model according to the input weights, the thresholds and the output weights of the artificial neural network model.
3. The method according to claim 1, wherein projecting structured light onto the user's face and acquiring the structured light image of the user's face when it is monitored that the gaze tracking event is triggered comprises:
when it is monitored that the gaze tracking event is triggered, projecting modulated structured light onto the user's face through a structured light source; and
acquiring the structured light image of the user's face through a camera.
4. The method according to claim 3, wherein determining the eye feature parameter of the user according to the structured light image comprises:
generating a three-dimensional model according to the structured light image; and
determining the eye feature parameter of the user according to the three-dimensional model.
5. The method according to claim 4, wherein generating the three-dimensional model according to the structured light image comprises:
demodulating offset information corresponding to distorted positions in the structured light image;
converting the offset information into depth information; and
generating the three-dimensional model according to the depth information.
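For illustration only: the offset-to-depth conversion of claim 5 follows standard structured-light triangulation, where the lateral displacement of a projected pattern feature on the sensor is inversely proportional to surface distance. A sketch under an assumed pinhole projector–camera model (the focal length and baseline values are arbitrary):

```python
import numpy as np

def offset_to_depth(offset_px, focal_px=800.0, baseline_mm=50.0):
    """Convert pattern offsets (disparity, in pixels) to depth by triangulation.

    depth = focal * baseline / offset, the pinhole projector-camera relation.
    Non-positive offsets are treated as "no measurable displacement" (infinite depth).
    """
    offset = np.asarray(offset_px, dtype=float)
    depth = np.full_like(offset, np.inf)
    valid = offset > 0
    depth[valid] = focal_px * baseline_mm / offset[valid]
    return depth

# A 20 px offset at f = 800 px with a 50 mm baseline gives 2000 mm of depth.
d = offset_to_depth([20.0, 40.0])
```

The claim's demodulation step (recovering the offsets from the distorted pattern) is omitted here; real systems derive it from fringe phase or dot correspondence.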
6. The method according to claim 4, wherein generating the three-dimensional model according to the structured light image comprises:
processing the structured light image based on an image preprocessing algorithm to obtain a processed structured light image; and
generating the three-dimensional model according to the processed structured light image.
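For illustration only: claim 6 leaves the image preprocessing algorithm open. A common choice for structured-light captures is intensity normalisation followed by light denoising, so that subsequent demodulation sees consistent contrast; the 3×3 box blur below is an assumed filter, not the patent's:

```python
import numpy as np

def preprocess(image):
    """Normalise to [0, 1] and apply a 3x3 box blur to suppress sensor noise."""
    img = np.asarray(image, dtype=float)
    lo, hi = img.min(), img.max()
    img = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    # 3x3 box blur as an average over the padded neighbourhood.
    padded = np.pad(img, 1, mode="edge")
    out = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(3) for dx in range(3)) / 9.0
    return out

clean = preprocess(np.array([[0.0, 255.0], [255.0, 0.0]]))
```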
7. The method according to any one of claims 1 to 6, wherein the eye feature parameter comprises a pupil center coordinate or an iris center coordinate.
8. A gaze tracking apparatus based on structured light, characterized by comprising:
a structured light image acquisition module, configured to project structured light onto a user's face and acquire a structured light image of the user's face when it is monitored that a gaze tracking event is triggered;
a pupil center coordinate determination module, configured to determine an eye feature parameter of the user according to the structured light image;
a target coordinate acquisition module, configured to input the eye feature parameter into a pre-trained gaze mapping model to obtain a target coordinate corresponding to the eye feature parameter, wherein the pre-trained gaze mapping model is generated by training an artificial neural network model on a set number of groups of training data, the training data comprising standard eye feature parameters and corresponding standard target coordinates; and
a gaze point determination module, configured to determine a gaze point of the user according to the target coordinate.
9. A device, characterized by comprising:
one or more processors; and
a memory for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the gaze tracking method based on structured light according to any one of claims 1-7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the gaze tracking method based on structured light according to any one of claims 1-7.
CN201810623544.1A 2018-06-15 2018-06-15 A gaze tracking method, apparatus, device and storage medium based on structured light Pending CN108985172A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810623544.1A CN108985172A (en) 2018-06-15 2018-06-15 A gaze tracking method, apparatus, device and storage medium based on structured light
PCT/CN2019/089352 WO2019237942A1 (en) 2018-06-15 2019-05-30 Line-of-sight tracking method and apparatus based on structured light, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810623544.1A CN108985172A (en) 2018-06-15 2018-06-15 A gaze tracking method, apparatus, device and storage medium based on structured light

Publications (1)

Publication Number Publication Date
CN108985172A true CN108985172A (en) 2018-12-11

Family

ID=64541414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810623544.1A Pending CN108985172A (en) A gaze tracking method, apparatus, device and storage medium based on structured light

Country Status (2)

Country Link
CN (1) CN108985172A (en)
WO (1) WO2019237942A1 (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109785375A (en) * 2019-02-13 2019-05-21 盎锐(上海)信息科技有限公司 Distance detection method and device based on 3D modeling
CN109886780A (en) * 2019-01-31 2019-06-14 苏州经贸职业技术学院 Commodity object detection method and device based on eye tracking
CN110008835A (en) * 2019-03-05 2019-07-12 成都旷视金智科技有限公司 Sight prediction technique, device, system and readable storage medium storing program for executing
CN110458104A (en) * 2019-08-12 2019-11-15 广州小鹏汽车科技有限公司 The human eye sight direction of human eye sight detection system determines method and system
CN110555426A (en) * 2019-09-11 2019-12-10 北京儒博科技有限公司 Sight line detection method, device, equipment and storage medium
WO2019237942A1 (en) * 2018-06-15 2019-12-19 北京七鑫易维信息技术有限公司 Line-of-sight tracking method and apparatus based on structured light, device, and storage medium
CN110619303A (en) * 2019-09-16 2019-12-27 Oppo广东移动通信有限公司 Method, device and terminal for tracking point of regard and computer readable storage medium
CN111339982A (en) * 2020-03-05 2020-06-26 西北工业大学 Multi-stage pupil center positioning technology implementation method based on features
CN111402480A (en) * 2020-02-29 2020-07-10 深圳壹账通智能科技有限公司 Visitor information management method, device, system, equipment and storage medium
CN111522430A (en) * 2018-12-21 2020-08-11 托比股份公司 Training of gaze tracking models
CN111753168A (en) * 2020-06-23 2020-10-09 广东小天才科技有限公司 Method and device for searching questions, electronic equipment and storage medium
CN112099622A (en) * 2020-08-13 2020-12-18 中国科学院深圳先进技术研究院 Sight tracking method and device
CN112101065A (en) * 2019-06-17 2020-12-18 北京七鑫易维科技有限公司 Laser-based eyeball tracking method and terminal equipment
CN112183160A (en) * 2019-07-04 2021-01-05 北京七鑫易维科技有限公司 Sight estimation method and device
WO2021134710A1 (en) * 2019-12-31 2021-07-08 深圳市大疆创新科技有限公司 Control method and related device
CN113448428A (en) * 2020-03-24 2021-09-28 中移(成都)信息通信科技有限公司 Method, device and equipment for predicting sight focus and computer storage medium
WO2024051476A1 (en) * 2022-09-07 2024-03-14 北京字跳网络技术有限公司 Head-mounted virtual reality device

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111462184B (en) * 2020-04-02 2022-09-23 桂林电子科技大学 Online sparse prototype tracking method based on twin neural network linear representation model
CN112507840A (en) * 2020-12-02 2021-03-16 中国船舶重工集团公司第七一六研究所 Man-machine hybrid enhanced small target detection and tracking method and system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030020755A1 (en) * 1997-04-30 2003-01-30 Lemelson Jerome H. System and methods for controlling automatic scrolling of information on a display or screen
CN102129554A (en) * 2011-03-18 2011-07-20 山东大学 Method for controlling password input based on eye-gaze tracking
CN102930252A (en) * 2012-10-26 2013-02-13 广东百泰科技有限公司 Sight tracking method based on neural network head movement compensation
CN103281507A (en) * 2013-05-06 2013-09-04 上海大学 Videophone system and videophone method based on true three-dimensional display
CN104951084A (en) * 2015-07-30 2015-09-30 京东方科技集团股份有限公司 Eye-tracking method and device
CN104951808A (en) * 2015-07-10 2015-09-30 电子科技大学 3D (three-dimensional) sight direction estimation method for robot interaction object detection
CN105931240A (en) * 2016-04-21 2016-09-07 西安交通大学 Three-dimensional depth sensing device and method
CN107193383A (en) * 2017-06-13 2017-09-22 华南师范大学 A kind of two grades of Eye-controlling focus methods constrained based on facial orientation
CN107451560A (en) * 2017-07-31 2017-12-08 广东欧珀移动通信有限公司 User's expression recognition method, device and terminal
US20180067550A1 (en) * 2016-07-29 2018-03-08 International Business Machines Corporation System, method, and recording medium for tracking gaze with respect to a moving plane with a camera with respect to the moving plane

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103356163B (en) * 2013-07-08 2016-03-30 东北电力大学 Based on fixation point measuring device and the method thereof of video image and artificial neural network
US10016130B2 (en) * 2015-09-04 2018-07-10 University Of Massachusetts Eye tracker system and methods for detecting eye parameters
US9983709B2 (en) * 2015-11-02 2018-05-29 Oculus Vr, Llc Eye tracking using structured light
CN107797664B (en) * 2017-10-27 2021-05-07 Oppo广东移动通信有限公司 Content display method and device and electronic device
CN108985172A (en) * 2018-06-15 2018-12-11 北京七鑫易维信息技术有限公司 A kind of Eye-controlling focus method, apparatus, equipment and storage medium based on structure light

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
LILING YU ET AL: "Eye-gaze tracking system based on particle swarm optimization and BP neural network", 《2016 12TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION (WCICA)》 *
S.M.KIM ET AL: "Non-intrusive eye gaze tracking under natural head movements", 《THE 26TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY》 *
WEI WEN ET AL: "The Android-Based Acquisition and CNN-Based Analysis for Gaze Estimation in Eye Tracking", 《CHINESE CONFERENCE ON BIOMETRIC RECOGNITION》 *
LIU, RUIAN: "Research on Single-Camera Gaze Tracking Technology", China Doctoral Dissertations Full-text Database, Information Science and Technology *
ZHANG, PENGYI et al.: "Design of a non-contact real-time gaze tracking system", Journal of University of Science and Technology Beijing *
YANG, WEN et al.: "A fast algorithm for iris localization", Computer Engineering and Applications *
JIN, CHUN et al.: "A survey of gaze point estimation methods in gaze tracking systems", Process Automation Instrumentation *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019237942A1 (en) * 2018-06-15 2019-12-19 北京七鑫易维信息技术有限公司 Line-of-sight tracking method and apparatus based on structured light, device, and storage medium
CN111522430A (en) * 2018-12-21 2020-08-11 托比股份公司 Training of gaze tracking models
CN111522430B (en) * 2018-12-21 2023-11-07 托比股份公司 Training of gaze tracking models
CN109886780A (en) * 2019-01-31 2019-06-14 苏州经贸职业技术学院 Commodity object detection method and device based on eye tracking
CN109886780B (en) * 2019-01-31 2022-04-08 苏州经贸职业技术学院 Commodity target detection method and device based on eyeball tracking
CN109785375A (en) * 2019-02-13 2019-05-21 盎锐(上海)信息科技有限公司 Distance detection method and device based on 3D modeling
CN110008835B (en) * 2019-03-05 2021-07-09 成都旷视金智科技有限公司 Sight line prediction method, device, system and readable storage medium
CN110008835A (en) * 2019-03-05 2019-07-12 成都旷视金智科技有限公司 Sight prediction technique, device, system and readable storage medium storing program for executing
CN112101065A (en) * 2019-06-17 2020-12-18 北京七鑫易维科技有限公司 Laser-based eyeball tracking method and terminal equipment
CN112183160A (en) * 2019-07-04 2021-01-05 北京七鑫易维科技有限公司 Sight estimation method and device
CN110458104A (en) * 2019-08-12 2019-11-15 广州小鹏汽车科技有限公司 The human eye sight direction of human eye sight detection system determines method and system
CN110555426A (en) * 2019-09-11 2019-12-10 北京儒博科技有限公司 Sight line detection method, device, equipment and storage medium
CN110619303A (en) * 2019-09-16 2019-12-27 Oppo广东移动通信有限公司 Method, device and terminal for tracking point of regard and computer readable storage medium
WO2021134710A1 (en) * 2019-12-31 2021-07-08 深圳市大疆创新科技有限公司 Control method and related device
CN111402480A (en) * 2020-02-29 2020-07-10 深圳壹账通智能科技有限公司 Visitor information management method, device, system, equipment and storage medium
CN111339982A (en) * 2020-03-05 2020-06-26 西北工业大学 Multi-stage pupil center positioning technology implementation method based on features
CN113448428A (en) * 2020-03-24 2021-09-28 中移(成都)信息通信科技有限公司 Method, device and equipment for predicting sight focus and computer storage medium
CN113448428B (en) * 2020-03-24 2023-04-25 中移(成都)信息通信科技有限公司 Sight focal point prediction method, device, equipment and computer storage medium
CN111753168A (en) * 2020-06-23 2020-10-09 广东小天才科技有限公司 Method and device for searching questions, electronic equipment and storage medium
CN112099622A (en) * 2020-08-13 2020-12-18 中国科学院深圳先进技术研究院 Sight tracking method and device
CN112099622B (en) * 2020-08-13 2022-02-01 中国科学院深圳先进技术研究院 Sight tracking method and device
WO2022032911A1 (en) * 2020-08-13 2022-02-17 中国科学院深圳先进技术研究院 Gaze tracking method and apparatus
WO2024051476A1 (en) * 2022-09-07 2024-03-14 北京字跳网络技术有限公司 Head-mounted virtual reality device

Also Published As

Publication number Publication date
WO2019237942A1 (en) 2019-12-19

Similar Documents

Publication Publication Date Title
CN108985172A (en) A gaze tracking method, apparatus, device and storage medium based on structured light
CN103176607B (en) An eye-controlled mouse implementation method and system
CN105913487B (en) A gaze direction computation method based on iris edge analysis and matching in eye images
CN104598915B (en) A gesture recognition method and device
CN102520796B (en) A gaze tracking method based on a stepwise-regression-analysis mapping model
CN106796449A (en) Gaze tracking method and device
Kelly et al. Recalibration of perceived distance in virtual environments occurs rapidly and transfers asymmetrically across scale
CN103356163B (en) Based on fixation point measuring device and the method thereof of video image and artificial neural network
Parks et al. Augmented saliency model using automatic 3d head pose detection and learned gaze following in natural scenes
CN112232310B (en) Face recognition system and method for expression capture
CN103324284A (en) Mouse control method based on face and eye detection
CN106326880A (en) Pupil center point positioning method
Hsu et al. A novel eye center localization method for multiview faces
CN110555426A (en) Sight line detection method, device, equipment and storage medium
McNamara et al. Perception in graphics, visualization, virtual environments and animation
Tayibnapis et al. Driver's gaze zone estimation by transfer learning
CN111209811B (en) Method and system for detecting eyeball attention position in real time
Zhang Application of intelligent virtual reality technology in college art creation and design teaching
CN103761011B (en) A virtual touch screen method, system and computing device
Sidhu et al. Deep learning based emotion detection in an online class
CN107146211A (en) Retinal vascular images noise-reduction method based on line spread function and bilateral filtering
McNamara et al. Perceptually-motivated graphics, visualization and 3D displays
Varkarakis et al. A deep learning approach to segmentation of distorted iris regions in head-mounted displays
Xu et al. Wayfinding design in transportation architecture–are saliency models or designer visual attention a good predictor of passenger visual attention?
Guo System analysis of the learning behavior recognition system for students in a law classroom: based on the improved SSD behavior recognition algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181211