CN109901716A - Gaze point prediction model building method and device, and gaze point prediction method - Google Patents


Info

Publication number
CN109901716A
CN109901716A (application CN201910159483.2A; granted as CN109901716B)
Authority
CN
China
Prior art keywords
eye
initial image
face
gaze point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910159483.2A
Other languages
Chinese (zh)
Other versions
CN109901716B (en)
Inventor
林煜
曾光
余清洲
许清泉
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Meitu Technology Co Ltd
Original Assignee
Xiamen Meitu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Meitu Technology Co Ltd filed Critical Xiamen Meitu Technology Co Ltd
Priority to CN201910159483.2A priority Critical patent/CN109901716B/en
Publication of CN109901716A publication Critical patent/CN109901716A/en
Application granted granted Critical
Publication of CN109901716B publication Critical patent/CN109901716B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The gaze point prediction model building method and device, and the gaze point prediction method provided by the invention, relate to the technical field of human-eye gaze point prediction. The method includes: obtaining multiple initial images containing a human face captured by a camera, and obtaining, for each initial image, the position coordinates on the display corresponding to the human-eye gaze point; processing each initial image to obtain sample data including an eye image, eye parameters, and face parameters; and performing deep network learning based on the sample data obtained from each initial image and the corresponding gaze-point position coordinates on the display, so as to build a gaze point prediction model. With the above method, the human-eye gaze point can be predicted quickly and reliably.

Description

Gaze point prediction model building method and device, and gaze point prediction method
Technical field
The present invention relates to the technical field of human-eye gaze point prediction, and in particular to a gaze point prediction model building method and device, and a gaze point prediction method.
Background technique
At present, human-eye gaze point prediction is mainly applied on terminal devices equipped with various additional hardware such as infrared emitters and depth cameras: a face image is acquired through the depth camera to estimate the gaze direction, the distance is calculated based on the infrared emitter, and the region on the terminal device where the gaze falls is thereby obtained.
The inventors have found that existing gaze point prediction methods are relatively complex to implement, requiring the support of various additional hardware such as infrared emitters and depth cameras, and their prediction results are usually not accurate enough. Therefore, providing a method that can predict the human-eye gaze point accurately and rapidly is a technical problem to be urgently solved.
Summary of the invention
In view of this, the purpose of the present invention is to provide a gaze point prediction model building method and device, and a gaze point prediction method, so as to improve the efficiency and accuracy of human-eye gaze point prediction.
To achieve the above object, embodiments of the present invention adopt the following technical solutions.
A gaze point prediction model building method, applied to the processor of a terminal device that further includes a camera and a display, the method comprising:
obtaining multiple initial images containing a human face captured by the camera, and obtaining, for each initial image, the position coordinates on the display corresponding to the human-eye gaze point;
processing each initial image to obtain sample data including an eye image, eye parameters, and face parameters;
performing deep network learning based on the sample data obtained from each initial image and the corresponding gaze-point position coordinates on the display, so as to build a gaze point prediction model.
Optionally, in the above gaze point prediction model building method, the step of processing each initial image to obtain sample data including an eye image, eye parameters, and face parameters includes:
performing face detection on each initial image to obtain a face image, locating the facial features in the face image to obtain a facial-feature frame, and obtaining an eye image from the face image based on the eye frame in the facial-feature frame;
obtaining a first proportion coefficient of the eye image in the initial image, a second proportion coefficient of the face image in the initial image, and the rectification angle and rectification scale of the face in the face image, and using the first proportion coefficient, the second proportion coefficient, the rectification angle, and the rectification scale as the face parameters;
obtaining the coordinate data, in the initial image, of the two eyes in the eye image, using the two-eye coordinate data as the eye parameters, and using the eye image, the face parameters, and the eye parameters as the sample data.
Optionally, in the above gaze point prediction model building method, the two-eye coordinate data includes left-eye coordinate data and right-eye coordinate data, and the step of obtaining the coordinate data, in the initial image, of the two eyes in the eye image includes:
obtaining the position coordinates, in the initial image, of the upper eyelid, lower eyelid, left corner, and right corner of the left eye in the eye image and averaging them to obtain the left-eye coordinate data, and obtaining the position coordinates, in the initial image, of the upper eyelid, lower eyelid, left corner, and right corner of the right eye in the eye image and averaging them to obtain the right-eye coordinate data.
Optionally, in the above gaze point prediction model building method, the step of obtaining the rectification angle and rectification scale of the face in the face image includes:
obtaining the abscissa difference and ordinate difference between the left and right eyes according to the left-eye coordinate data and the right-eye coordinate data;
obtaining the rectification angle and rectification scale according to the abscissa difference and the ordinate difference.
Optionally, in the above gaze point prediction model building method, the step of performing deep network learning based on the sample data obtained from each initial image and the corresponding gaze-point position coordinates on the display to build a gaze point prediction model includes:
training with the PyTorch framework, based on the sample data obtained from each initial image and the corresponding gaze-point position coordinates on the display, to build the gaze point prediction model.
The application also provides a gaze point prediction method, applied to the processor of a terminal device that further includes a camera and a display, the processor storing a gaze point prediction model built according to the above gaze point prediction model building method, the gaze point prediction method comprising:
obtaining an image to be detected containing a human face captured by the camera;
processing the image to be detected to obtain data to be measured including an eye image, eye parameters, and face parameters;
predicting on the data to be measured using the gaze point prediction model to obtain the target position coordinates on the display corresponding to the human-eye gaze point in the image to be detected.
Optionally, in the above gaze point prediction method, after the step of predicting on the data to be measured using the gaze point prediction model to obtain the target position coordinates on the display corresponding to the human-eye gaze point in the image to be detected, the method further includes:
forming, according to the resolution of the display, a focus frame on the display interface of the display centered on the pixel corresponding to the target position coordinates, so as to perform processing based on the focus frame.
The application also provides a gaze point prediction model building device, applied to the processor in a terminal device that further includes a camera and a display, the device comprising:
an image acquisition module, configured to obtain multiple initial images containing a human face captured by the camera, and to obtain, for each initial image, the position coordinates on the display corresponding to the human-eye gaze point;
a sample acquisition module, configured to process each initial image to obtain sample data including an eye image, eye parameters, and face parameters;
a prediction model acquisition module, configured to perform deep network learning based on the sample data obtained from each initial image and the corresponding gaze-point position coordinates on the display, so as to build a gaze point prediction model.
Optionally, in the above gaze point prediction model building device, the sample acquisition module includes:
a detection and locating submodule, configured to perform face detection on each initial image to obtain a face image, locate the facial features in the face image to obtain a facial-feature frame, and obtain an eye image from the face image based on the eye frame in the facial-feature frame;
a face parameter acquisition submodule, configured to obtain a first proportion coefficient of the eye image in the initial image, a second proportion coefficient of the face image in the initial image, and the rectification angle and rectification scale of the face in the face image, and to use the first proportion coefficient, the second proportion coefficient, the rectification angle, and the rectification scale as the face parameters;
a sample data acquisition submodule, configured to obtain the coordinate data, in the initial image, of the two eyes in the eye image, use the two-eye coordinate data as the eye parameters, and use the eye image, the face parameters, and the eye parameters as the sample data.
Optionally, in the above gaze point prediction model building device, the prediction model acquisition module is further configured to train with the PyTorch framework, based on the sample data obtained from each initial image and the corresponding gaze-point position coordinates on the display, to build the gaze point prediction model.
With the gaze point prediction model building method and device, and the gaze point prediction method provided by the invention, multiple initial images containing a human face are captured by the camera, the position coordinates on the display corresponding to the human-eye gaze point in each initial image are obtained, each initial image is processed to obtain sample data including an eye image, eye parameters, and face parameters, and deep network learning is performed based on the sample data obtained from each initial image and the corresponding gaze-point position coordinates on the display to build a gaze point prediction model, so that when the above prediction model is used for gaze point prediction, the human-eye gaze point can be predicted quickly and reliably.
To make the above objects, features, and advantages of the present invention clearer and easier to understand, preferred embodiments are cited below and described in detail with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a connection block diagram of the terminal device provided in an embodiment of the present invention.
Fig. 2 is a schematic flowchart of the gaze point prediction model building method provided in an embodiment of the present invention.
Fig. 3 is a schematic flowchart of step S120 in Fig. 2.
Fig. 4 is a schematic flowchart of the gaze point prediction method provided in an embodiment of the present invention.
Fig. 5 is a connection block diagram of the gaze point prediction model building device provided in an embodiment of the present invention.
Fig. 6 is a connection block diagram of the sample acquisition module provided in an embodiment of the present invention.
Reference numerals: 10 - terminal device; 12 - memory; 14 - processor; 16 - camera; 18 - display; 100 - gaze point prediction model building device; 110 - image acquisition module; 120 - sample acquisition module; 122 - detection and locating submodule; 124 - eye data acquisition submodule; 126 - face parameter acquisition submodule; 128 - sample acquisition submodule; 130 - prediction model acquisition module.
Specific embodiment
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated herein and in the drawings can be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments of the present invention provided in the accompanying drawings is not intended to limit the claimed scope of the present invention, but merely represents selected embodiments of the invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
It should also be noted that similar reference numerals and letters indicate similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined and explained in subsequent drawings.
In the description of the present invention, unless otherwise specifically defined or limited, the terms "setting", "connected", and "connection" should be interpreted broadly: a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; and it may be a direct connection, an indirect connection through an intermediary, or an internal connection between two elements. For a person of ordinary skill in the art, the specific meaning of the above terms in the present invention can be understood according to the specific circumstances.
Referring to Fig. 1, the present invention provides a terminal device 10, which may be a mobile phone, computer, tablet computer, or other device with image acquisition, image display, and data processing functions; it is not specifically limited here. The terminal device 10 includes a memory 12, a processor 14, a camera 16, and a display 18.
The memory 12, processor 14, camera 16, and display 18 are directly or indirectly electrically connected to one another to realize the transmission or interaction of data. For example, these elements may be electrically connected to one another through one or more communication buses or signal lines. The memory 12 stores software function modules in the form of software or firmware, and the processor 14 runs the software programs and modules stored in the memory 12, such as the gaze point prediction model building device 100 in the embodiments of the present invention, thereby executing various function applications and data processing, that is, implementing the gaze point prediction model building method and the gaze point prediction method in the embodiments of the present invention.
The memory 12 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), etc. The memory 12 is used to store a program, and the processor 14 executes the program after receiving an execution instruction.
The processor 14 may be an integrated circuit chip with signal processing capability. The processor 14 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Referring to Fig. 2, the present invention provides a gaze point prediction model building method, which can be applied to the processor 14 in the above terminal device 10. The method includes steps S110-S130.
Step S110: obtaining multiple initial images containing a human face captured by the camera 16, and obtaining, for each initial image, the position coordinates on the display 18 corresponding to the human-eye gaze point.
The position coordinates on the display 18 corresponding to the human-eye gaze point may be the pixel coordinates of the gaze point on the display 18, or coordinates in a coordinate system established based on the display interface of the display 18; this is not specifically limited here. It can be understood that when the position coordinates are coordinates in a coordinate system established based on the display interface of the display 18, the coordinate system may take a fixed point of the display 18 as the origin, such as the lower-left vertex, the center point, or the lower-right vertex, with the length direction and width direction of the display 18 as the horizontal and vertical coordinate axes.
It can be understood that the multiple initial images may be face images collected by the camera 16 when different users gaze at different locations on the display 18, and the face images include the eye regions.
Step S120: processing each initial image to obtain sample data including an eye image, eye parameters, and face parameters.
The initial image may be processed to obtain the eye image, eye parameters, and face parameters by obtaining a face image from the initial image using face detection or face recognition and locating, and obtaining the face parameters based on the face image. The face parameters may be, but are not limited to, the rectification angle and rectification scale of the face, the proportion of the face area in the initial image, and/or the proportion of the facial-feature image in the initial image; the eye parameters may include, but are not limited to, the position coordinates of the two eyes in the initial image and/or the proportion of the eye image in the initial image or in the face image.
In the present embodiment, step S120 includes:
Step S122: performing face detection on each initial image to obtain a face image, locating the facial features in the face image to obtain a facial-feature frame, and obtaining an eye image from the face image based on the eye frame in the facial-feature frame.
In the present embodiment, to facilitate subsequent processing, all the eye images obtained have the same size. That is, in the above step S122, obtaining the eye image from the face image based on the eye frame in the facial-feature frame is specifically: obtaining an eye image of a set size from the face image based on the eye frame in the facial-feature frame.
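The fixed-size crop in step S122 can be sketched as follows. This is a minimal illustration assuming the image is a row-major 2-D list and the centre of the eye frame is known; the patent does not specify the crop logic, and the boundary clamping here is an assumption.

```python
def crop_eye(image, cx, cy, size):
    """Crop a size x size patch centred on (cx, cy), clamping to the image
    bounds so every eye image has the same set size."""
    h, w = len(image), len(image[0])
    half = size // 2
    x0 = max(0, min(cx - half, w - size))  # clamp left edge
    y0 = max(0, min(cy - half, h - size))  # clamp top edge
    return [row[x0:x0 + size] for row in image[y0:y0 + size]]
```

A uniform output size keeps the later network input shape constant regardless of where the eye frame sits in the face image.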
Step S124: obtaining the coordinate data, in the initial image, of the two eyes in the eye image, and using the two-eye coordinate data as the eye parameters.
Specifically, the above step may obtain the eye-corner coordinate data of the two eyes in the eye image as the eye parameters, or obtain the center position coordinates of the two eyes in the eye image as the eye parameters; this is not specifically limited here and may be configured according to actual requirements.
In the present embodiment, the two-eye coordinate data includes left-eye coordinate data and right-eye coordinate data, and step S124 is specifically:
obtaining the position coordinates, in the initial image, of the upper eyelid, lower eyelid, left corner, and right corner of the left eye in the eye image and averaging them to obtain the left-eye coordinate data, and obtaining the position coordinates, in the initial image, of the upper eyelid, lower eyelid, left corner, and right corner of the right eye in the eye image and averaging them to obtain the right-eye coordinate data.
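The averaging in step S124 can be written out directly. A minimal sketch, assuming each landmark is an (x, y) pair in initial-image coordinates; the function name is an illustrative choice, not from the patent.

```python
def eye_coordinate(upper_lid, lower_lid, left_corner, right_corner):
    """Average the four landmark coordinates of one eye (upper eyelid,
    lower eyelid, left corner, right corner) to obtain the single eye
    coordinate used as an eye parameter."""
    points = [upper_lid, lower_lid, left_corner, right_corner]
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    return (x, y)
```

Calling this once with the left-eye landmarks and once with the right-eye landmarks yields the left-eye and right-eye coordinate data.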
Step S126: obtaining a first proportion coefficient of the eye image in the initial image, a second proportion coefficient of the face image in the initial image, and the rectification angle and rectification scale of the face in the face image, and using the first proportion coefficient, the second proportion coefficient, the rectification angle, and the rectification scale as the face parameters.
The rectification angle and rectification scale may be obtained based on the cheek coordinates, eye coordinates, and/or eyebrow coordinates in the face image.
In the present embodiment, step S126 includes:
obtaining the abscissa difference and ordinate difference between the left and right eyes according to the left-eye coordinate data and the right-eye coordinate data;
obtaining the rectification angle and rectification scale according to the abscissa difference and the ordinate difference.
The specific way of obtaining the rectification angle according to the abscissa difference and ordinate difference may be: obtaining the rectification angle using the atan2 function with the abscissa difference and ordinate difference. The specific way of obtaining the rectification scale according to the abscissa difference and ordinate difference may be: taking the square root of the sum of the square of the abscissa difference and the square of the ordinate difference to obtain a root value, and dividing a constant (e.g., 100) by the root value to obtain the rectification scale.
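The atan2 and inverse-distance computation just described can be sketched as follows. The constant 100 matches the example value in the text; the function name, signature, and the convention that the angle is returned in radians are assumptions for illustration.

```python
import math

def rectification(left_eye, right_eye, constant=100.0):
    """Compute the face rectification angle and scale from the averaged
    left-eye and right-eye coordinates."""
    dx = right_eye[0] - left_eye[0]        # abscissa difference
    dy = right_eye[1] - left_eye[1]        # ordinate difference
    angle = math.atan2(dy, dx)             # roll angle of the inter-eye line
    scale = constant / math.hypot(dx, dy)  # constant / sqrt(dx^2 + dy^2)
    return angle, scale
```

The scale is inversely proportional to the inter-eye distance, so a face closer to the camera (larger distance in pixels) yields a smaller rectification scale.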
Step S128: using the eye image, face parameters, and eye parameters as the sample data.
Step S130: performing deep network learning based on the sample data obtained from each initial image and the corresponding position coordinates on the display 18 of the human-eye gaze point in each initial image, so as to build a gaze point prediction model.
Specifically, the above step S130 may divide the sample data and position coordinates corresponding to the multiple initial images into multiple groups, and successively perform deep network learning on each group of sample data and corresponding position coordinates batch by batch.
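The grouping into batches can be sketched as a simple split of paired lists. A minimal sketch under the assumption that samples and their gaze-point coordinates are parallel sequences; a real training pipeline would typically also shuffle between epochs.

```python
def make_batches(samples, targets, batch_size):
    """Split paired (sample data, gaze-point coordinate) sequences into
    successive mini-batches for batch-by-batch learning."""
    assert len(samples) == len(targets)
    return [
        (samples[i:i + batch_size], targets[i:i + batch_size])
        for i in range(0, len(samples), batch_size)
    ]
```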
In the present embodiment, the above step S130 includes: training with the PyTorch framework, based on the sample data obtained from each initial image and the corresponding position coordinates on the display 18 of the human-eye gaze point in each initial image, to build the gaze point prediction model.
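The patent names only the PyTorch framework and does not disclose the network architecture. As a framework-free stand-in for the regression being learned, the sketch below fits a linear model by stochastic gradient descent, mapping a feature vector (standing in for the eye and face parameters) to a two-dimensional screen coordinate; the actual embodiment would train a deep network on the eye image together with these parameters.

```python
def train_linear_gaze(features, coords, lr=0.05, epochs=500):
    """Toy stand-in for the training step: fit y = W x + b by per-sample
    gradient descent, where y is a 2-D gaze-point coordinate."""
    n_feat = len(features[0])
    W = [[0.0] * n_feat for _ in range(2)]
    b = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in zip(features, coords):
            pred = [sum(W[o][i] * x[i] for i in range(n_feat)) + b[o]
                    for o in range(2)]
            err = [pred[o] - y[o] for o in range(2)]
            for o in range(2):           # gradient step on each output
                b[o] -= lr * err[o]
                for i in range(n_feat):
                    W[o][i] -= lr * err[o] * x[i]
    return W, b

def predict_gaze(W, b, x):
    """Predict a 2-D gaze-point coordinate from a feature vector."""
    return [sum(W[o][i] * x[i] for i in range(len(x))) + b[o] for o in range(2)]
```

The loop mirrors what the batch-by-batch PyTorch training would do at a much smaller scale: forward pass, coordinate-regression error, gradient update.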
With the above arrangement, a gaze point prediction model is built, the prediction result obtained when using it for human-eye gaze point prediction is more accurate, and the prior-art problems of excessively high hardware cost and long time consumption, caused by requiring an infrared emitter and a depth camera for human-eye gaze point detection, are avoided.
Referring to Fig. 4, on the above basis, the application also provides a gaze point prediction method, applied to the above terminal device 10; a gaze point prediction model built according to the above gaze point prediction model building method is stored in the processor 14 of the terminal device 10. The gaze point prediction method includes:
Step S210: obtaining an image to be detected containing a human face captured by the camera 16.
Step S220: processing the image to be detected to obtain data to be measured including an eye image, eye parameters, and face parameters.
The way the image to be detected is processed is similar to the way the initial images are processed to obtain the sample data; refer to the specific description of step S120 above, and the specific description of step S220 is accordingly not repeated here.
Step S230: predicting on the data to be measured using the gaze point prediction model to obtain the target position coordinates on the display 18 corresponding to the human-eye gaze point in the image to be detected.
Under normal circumstances, when eye-controlled operation of the terminal device 10 is needed, to ensure the accuracy of the eye-controlled operation, in the present embodiment, after step S230 is executed, the method further includes:
forming, according to the resolution of the display 18, a focus frame on the display interface of the display 18 centered on the pixel corresponding to the target position coordinates, so as to perform processing based on the focus frame.
Processing based on the focus frame may be as follows: different operation modes corresponding to different set positions are prestored in the processor 14, and the operation modes may include, but are not limited to, operations such as page turning and selection. Processing based on the focus frame may be judging whether the region where the focus frame is located contains a set position and, when it does, processing according to the operation mode corresponding to that set position, so as to realize effects such as page turning and selection.
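The focus-frame step above can be sketched as follows. This assumes the predicted target coordinate is normalized to [0, 1] in each axis and that the focus frame and set positions are rectangles (left, top, right, bottom); both representations are illustrative choices, not from the patent.

```python
def focus_frame(target, resolution, frame_size):
    """Map a normalized gaze-point coordinate to a pixel on the display
    (using the display resolution) and build a focus frame centred on it."""
    w, h = resolution
    px = min(int(target[0] * w), w - 1)
    py = min(int(target[1] * h), h - 1)
    half = frame_size // 2
    return (px - half, py - half, px + half, py + half)

def hits_set_position(frame, region):
    """Return True when the focus frame's centre lies inside a prestored
    set-position region, e.g. a 'page turn' or 'select' zone."""
    cx = (frame[0] + frame[2]) / 2
    cy = (frame[1] + frame[3]) / 2
    return region[0] <= cx <= region[2] and region[1] <= cy <= region[3]
```

When `hits_set_position` is true for some region, the processor would carry out the operation mode prestored for that region.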
Referring to Fig. 5, on the above basis, the present invention also provides a gaze point prediction model building device 100 that can be applied to the processor 14 in the above terminal device 10. The gaze point prediction model building device 100 includes an image acquisition module 110, a sample acquisition module 120, and a prediction model acquisition module 130.
The image acquisition module 110 is configured to obtain multiple initial images containing a human face captured by the camera 16, and to obtain, for each initial image, the position coordinates on the display 18 corresponding to the human-eye gaze point. In the present embodiment, the image acquisition module 110 may be used to execute step S110 shown in Fig. 2; for a specific description of the image acquisition module 110, refer to the description of step S110 above.
The sample acquisition module 120 is configured to process each initial image to obtain sample data including an eye image, eye parameters, and face parameters. In the present embodiment, the sample acquisition module 120 may be used to execute step S120 shown in Fig. 2; for a specific description of the sample acquisition module 120, refer to the description of step S120 above.
Referring to Fig. 6, in the present embodiment, the sample acquisition module 120 includes a detection and locating submodule 122, an eye data acquisition submodule 124, a face parameter acquisition submodule 126, and a sample acquisition submodule 128.
The detection positioning submodule 122 obtains facial image for carrying out Face datection to initial pictures described in every, Positioned to obtain human face five-sense-organ frame to the face in the facial image, and based on the eye frame in the human face five-sense-organ frame from Eye figure is obtained in the facial image.In the present embodiment, the detection positioning submodule 122 can be used for executing shown in Fig. 3 Step S122, about it is described detection positioning submodule 122 specific descriptions be referred to the description to step S122 above.
The optical data obtains submodule 124, for obtaining the eyes in the eye figure in the initial pictures Eyes coordinate data, using the eyes coordinate data as eye parameter.In the present embodiment, the optical data obtains submodule 124 can be used for executing step S124 shown in Fig. 3, and the specific descriptions for obtaining submodule 124 about the optical data can join According to the description above to step S124.
The face gain of parameter submodule 126, for obtaining first accounting of the eye figure in the initial pictures The angle of becoming a full member of coefficient, the facial image face in the second accounting coefficient and the facial image in the initial pictures With scale of becoming a full member, using the first accounting coefficient, the second accounting coefficient, angle of becoming a full member and scale of becoming a full member as face parameter. In the present embodiment, the face gain of parameter submodule 126 can be used for executing step S126 shown in Fig. 3, about the people The specific descriptions of face gain of parameter submodule 126 are referred to the description to step S126 above.
The sample acquisition submodule 128 is configured to take the eye image, the face parameters, and the eye parameters as the sample data. In the present embodiment, the sample acquisition submodule 128 may be configured to execute step S128 shown in Fig. 3; for a detailed description of the sample acquisition submodule 128, reference may be made to the description of step S128 above.
The prediction model acquisition module 130 is configured to perform deep network learning based on the sample data obtained by processing each initial image and the position coordinates, on the display 18, corresponding to the human-eye sight point in each initial image, so as to establish a sight point prediction model. In the present embodiment, the prediction model acquisition module 130 may be configured to execute step S130 shown in Fig. 2; for a detailed description of the prediction model acquisition module 130, reference may be made to the description of step S130 above.
In the present embodiment, the prediction model acquisition module 130 is further configured to perform training using the PyTorch framework, based on the sample data obtained by processing each initial image and the position coordinates on the display 18 corresponding to the human-eye sight point in each initial image, so as to establish the sight point prediction model.
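A minimal PyTorch sketch of such a model and one training step is given below. The architecture (a small CNN over the eye image plus the eye/face parameters as an extra input vector, regressing an (x, y) display coordinate), all layer sizes, the crop size, and the class name are illustrative assumptions; the patent does not disclose a specific network.

```python
import torch
import torch.nn as nn

class GazePointNet(nn.Module):
    """Sketch of a sight point prediction model: convolutional features from
    the eye image are concatenated with the 8 eye/face parameters (two eye
    coordinates plus four face parameters) to regress display coordinates."""
    def __init__(self, n_params: int = 8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 + n_params, 64), nn.ReLU(),
            nn.Linear(64, 2),  # (x, y) position on the display
        )

    def forward(self, eye_img, params):
        feat = self.conv(eye_img)
        return self.head(torch.cat([feat, params], dim=1))

model = GazePointNet()
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One dummy training step: 4 samples of 36x60 eye crops plus 8 parameters,
# supervised by the sight point coordinates recorded for each initial image.
eyes = torch.randn(4, 3, 36, 60)
params = torch.randn(4, 8)
target = torch.randn(4, 2)
pred = model(eyes, params)
loss = loss_fn(pred, target)
loss.backward()
opt.step()
print(pred.shape)  # torch.Size([4, 2])
```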
In summary, according to the sight point prediction model establishing method and device and the sight point prediction method provided by the present invention, the method obtains multiple initial images containing a face captured by the camera 16, obtains the position coordinates on the display 18 corresponding to the human-eye sight point in each initial image, and processes each initial image separately to obtain sample data including an eye image, eye parameters, and face parameters; deep network learning is then performed based on the sample data obtained by processing each initial image and the position coordinates on the display 18 corresponding to the human-eye sight point in each initial image, so as to establish a sight point prediction model. The prediction results obtained when predicting human-eye sight points with the established sight point prediction model are thereby more accurate, and the problems of the prior art are avoided, namely that sight point detection requires hardware devices such as infrared emitters and depth cameras, resulting in excessive hardware cost and long detection times.
In the several embodiments provided herein, it should be understood that the disclosed device and method may also be implemented in other manners. The device and method embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the drawings show the architectures, functions, and operations that may be implemented by devices, methods, and computer program products according to multiple embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, the terminal device 10, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), the random access memory (RAM) 12, a magnetic disk, or an optical disk. It should be noted that, in this document, the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A sight point prediction model establishing method, applied to a processor of a terminal device, the terminal device further comprising a camera and a display, wherein the method comprises:
obtaining multiple initial images containing a face captured by the camera, and obtaining position coordinates, on the display, corresponding to a human-eye sight point in each initial image;
processing each initial image separately to obtain sample data including an eye image, eye parameters, and face parameters; and
performing deep network learning based on the sample data obtained by processing each initial image and the position coordinates on the display corresponding to the human-eye sight point in each initial image, so as to establish a sight point prediction model.
2. The sight point prediction model establishing method according to claim 1, wherein the step of processing each initial image separately to obtain sample data including an eye image, eye parameters, and face parameters comprises:
performing face detection on each initial image to obtain a face image, locating the facial features in the face image to obtain facial feature frames, and obtaining an eye image from the face image based on the eye frame among the facial feature frames;
obtaining binocular coordinate data, in the initial image, of the two eyes in the eye image, and taking the binocular coordinate data as the eye parameters;
obtaining a first proportion coefficient of the eye image in the initial image, a second proportion coefficient of the face image in the initial image, and a rectification angle and rectification scale of the face in the face image, and taking the first proportion coefficient, the second proportion coefficient, the rectification angle, and the rectification scale as the face parameters; and
taking the eye image, the face parameters, and the eye parameters as the sample data.
3. The sight point prediction model establishing method according to claim 2, wherein the binocular coordinate data includes left-eye coordinate data and right-eye coordinate data, and the step of obtaining the binocular coordinate data, in the initial image, of the two eyes in the eye image comprises:
obtaining the position coordinates, in the initial image, of the upper eyelid, lower eyelid, left eye corner, and right eye corner of the left eye in the eye image and averaging them to obtain the left-eye coordinate data, and obtaining the position coordinates, in the initial image, of the upper eyelid, lower eyelid, left eye corner, and right eye corner of the right eye in the eye image and averaging them to obtain the right-eye coordinate data.
4. The sight point prediction model establishing method according to claim 3, wherein the step of obtaining the rectification angle and rectification scale of the face in the face image comprises:
obtaining an abscissa difference and an ordinate difference between the left and right eyes according to the left-eye coordinate data and the right-eye coordinate data; and
obtaining the rectification angle and the rectification scale according to the abscissa difference and the ordinate difference.
5. The sight point prediction model establishing method according to claim 1, wherein the step of performing deep network learning based on the sample data obtained by processing each initial image and the position coordinates on the display corresponding to the human-eye sight point in each initial image, so as to establish a sight point prediction model, comprises:
performing training using the PyTorch framework, based on the sample data obtained by processing each initial image and the position coordinates on the display corresponding to the human-eye sight point in each initial image, so as to establish the sight point prediction model.
6. A sight point prediction method, applied to a processor of a terminal device, the terminal device further comprising a camera and a display, wherein the processor stores a sight point prediction model established by the sight point prediction model establishing method according to any one of claims 1 to 5, and the sight point prediction method comprises:
obtaining a to-be-detected image containing a face captured by the camera;
processing the to-be-detected image to obtain detection data including an eye image, eye parameters, and face parameters; and
predicting on the detection data using the sight point prediction model to obtain target position coordinates, on the display, corresponding to the human-eye sight point in the to-be-detected image.
7. The sight point prediction method according to claim 6, wherein after the step of predicting on the detection data using the sight point prediction model to obtain the target position coordinates on the display corresponding to the human-eye sight point in the to-be-detected image, the method further comprises:
forming, on the display interface of the display according to the resolution of the display, a focus frame centered on the pixel corresponding to the target position coordinates, so as to perform processing based on the focus frame.
8. A sight point prediction model establishing device, applied to a processor in a terminal device, wherein the terminal device further comprises a camera and a display, and the device comprises:
an image acquisition module, configured to obtain multiple initial images containing a face captured by the camera, and to obtain position coordinates, on the display, corresponding to a human-eye sight point in each initial image;
a sample acquisition module, configured to process each initial image separately to obtain sample data including an eye image, eye parameters, and face parameters; and
a prediction model acquisition module, configured to perform deep network learning based on the sample data obtained by processing each initial image and the position coordinates on the display corresponding to the human-eye sight point in each initial image, so as to establish a sight point prediction model.
9. The sight point prediction model establishing device according to claim 8, wherein the sample acquisition module comprises:
a detection and positioning submodule, configured to perform face detection on each initial image to obtain a face image, to locate the facial features in the face image to obtain facial feature frames, and to obtain an eye image from the face image based on the eye frame among the facial feature frames;
an eye data acquisition submodule, configured to obtain binocular coordinate data, in the initial image, of the two eyes in the eye image, and to take the binocular coordinate data as the eye parameters;
a face parameter acquisition submodule, configured to obtain a first proportion coefficient of the eye image in the initial image, a second proportion coefficient of the face image in the initial image, and a rectification angle and rectification scale of the face in the face image, and to take the first proportion coefficient, the second proportion coefficient, the rectification angle, and the rectification scale as the face parameters; and
a sample acquisition submodule, configured to take the eye image, the face parameters, and the eye parameters as the sample data.
10. The sight point prediction model establishing device according to claim 8, wherein the prediction model acquisition module is further configured to perform training using the PyTorch framework, based on the sample data obtained by processing each initial image and the position coordinates on the display corresponding to the human-eye sight point in each initial image, so as to establish the sight point prediction model.
CN201910159483.2A 2019-03-04 2019-03-04 Sight point prediction model establishing method and device and sight point prediction method Active CN109901716B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910159483.2A CN109901716B (en) 2019-03-04 2019-03-04 Sight point prediction model establishing method and device and sight point prediction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910159483.2A CN109901716B (en) 2019-03-04 2019-03-04 Sight point prediction model establishing method and device and sight point prediction method

Publications (2)

Publication Number Publication Date
CN109901716A true CN109901716A (en) 2019-06-18
CN109901716B CN109901716B (en) 2022-08-26

Family

ID=66946275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910159483.2A Active CN109901716B (en) 2019-03-04 2019-03-04 Sight point prediction model establishing method and device and sight point prediction method

Country Status (1)

Country Link
CN (1) CN109901716B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021082636A1 (en) * 2019-10-29 2021-05-06 深圳云天励飞技术股份有限公司 Region of interest detection method and apparatus, readable storage medium and terminal device
CN116030512A (en) * 2022-08-04 2023-04-28 荣耀终端有限公司 Gaze point detection method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793720A (en) * 2014-02-12 2014-05-14 北京海鑫科金高科技股份有限公司 Method and system for positioning eyes
KR20180014317A (en) * 2016-07-29 2018-02-08 씨티아이코리아 주식회사 A face certifying method with eye tracking using Haar-Like-Feature
CN108171152A (en) * 2017-12-26 2018-06-15 深圳大学 Deep learning human eye sight estimation method, equipment, system and readable storage medium storing program for executing
CN108268850A (en) * 2018-01-24 2018-07-10 成都鼎智汇科技有限公司 A kind of big data processing method based on image
CN108681699A (en) * 2018-05-04 2018-10-19 上海像我信息科技有限公司 A kind of gaze estimation method and line-of-sight estimation device based on deep learning
CN108875524A (en) * 2018-01-02 2018-11-23 北京旷视科技有限公司 Gaze estimation method, device, system and storage medium
CN109344714A (en) * 2018-08-31 2019-02-15 电子科技大学 One kind being based on the matched gaze estimation method of key point
WO2019033569A1 (en) * 2017-08-17 2019-02-21 平安科技(深圳)有限公司 Eyeball movement analysis method, device and storage medium
WO2019033571A1 (en) * 2017-08-17 2019-02-21 平安科技(深圳)有限公司 Facial feature point detection method, apparatus and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793720A (en) * 2014-02-12 2014-05-14 北京海鑫科金高科技股份有限公司 Method and system for positioning eyes
KR20180014317A (en) * 2016-07-29 2018-02-08 씨티아이코리아 주식회사 A face certifying method with eye tracking using Haar-Like-Feature
WO2019033569A1 (en) * 2017-08-17 2019-02-21 平安科技(深圳)有限公司 Eyeball movement analysis method, device and storage medium
WO2019033571A1 (en) * 2017-08-17 2019-02-21 平安科技(深圳)有限公司 Facial feature point detection method, apparatus and storage medium
CN108171152A (en) * 2017-12-26 2018-06-15 深圳大学 Deep learning human eye sight estimation method, equipment, system and readable storage medium storing program for executing
CN108875524A (en) * 2018-01-02 2018-11-23 北京旷视科技有限公司 Gaze estimation method, device, system and storage medium
CN108268850A (en) * 2018-01-24 2018-07-10 成都鼎智汇科技有限公司 A kind of big data processing method based on image
CN108681699A (en) * 2018-05-04 2018-10-19 上海像我信息科技有限公司 A kind of gaze estimation method and line-of-sight estimation device based on deep learning
CN109344714A (en) * 2018-08-31 2019-02-15 电子科技大学 One kind being based on the matched gaze estimation method of key point

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WEI WEN等: "The Android-Based Acquisition and CNN-Based Analysis for Gaze Estimation in Eye Tracking", 《CHINESE CONFERENCE ON BIOMETRIC RECOGNITION》 *
WANG JING; SU GUANGDA: "Improved frontal face synthesis based on binocular stereo vision", Journal of Applied Sciences *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021082636A1 (en) * 2019-10-29 2021-05-06 深圳云天励飞技术股份有限公司 Region of interest detection method and apparatus, readable storage medium and terminal device
CN116030512A (en) * 2022-08-04 2023-04-28 荣耀终端有限公司 Gaze point detection method and device
CN116030512B (en) * 2022-08-04 2023-10-31 荣耀终端有限公司 Gaze point detection method and device

Also Published As

Publication number Publication date
CN109901716B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
US8988317B1 (en) Depth determination for light field images
CN106462949B (en) Depth transducer is calibrated and is corrected pixel-by-pixel
CN109584307B (en) System and method for improving calibration of intrinsic parameters of a camera
CN108230397A (en) Multi-lens camera is demarcated and bearing calibration and device, equipment, program and medium
EP2843590A2 (en) System and method for package dimensioning
CN108489423B (en) Method and system for measuring horizontal inclination angle of product surface
US20150153158A1 (en) Length measurement method and device of the same
US20130083990A1 (en) Using Videogrammetry to Fabricate Parts
CN108074237B (en) Image definition detection method and device, storage medium and electronic equipment
US20200175663A1 (en) Image processing system, server apparatus, image processing method, and image processing program
JP2007129709A (en) Method for calibrating imaging device, method for calibrating imaging system including arrangement of imaging devices, and imaging system
EP3783567B1 (en) Break analysis apparatus and method
US11562478B2 (en) Method and system for testing field of view
US20090087078A1 (en) Display testing apparatus and method
CN111508027A (en) Method and device for calibrating external parameters of camera
US10495512B2 (en) Method for obtaining parameters defining a pixel beam associated with a pixel of an image sensor comprised in an optical device
CN112146848A (en) Method and device for determining distortion parameter of camera
CN109901716A (en) Sight line point prediction model method for building up, device and sight line point prediction technique
US10375383B2 (en) Method and apparatus for adjusting installation flatness of lens in real time
CN102236790A (en) Image processing method and device
JP2010217984A (en) Image detector and image detection method
CN105427315B (en) Digital instrument image position testing method and device
US11069084B2 (en) Object identification method and device
CN108519215B (en) Pupil distance adaptability test system and method and test host
EP3572860A1 (en) A method, an apparatus and a computer program product for focusing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant