CN107292219A - Method and electronic device for driving eye motion - Google Patents

Method and electronic device for driving eye motion

Info

Publication number
CN107292219A
CN107292219A
Authority
CN
China
Prior art keywords: result, eyes, fitting, eye, face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610203050.9A
Other languages
Chinese (zh)
Inventor
武俊敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhang Ying Information Technology (Shanghai) Co Ltd
Original Assignee
Zhang Ying Information Technology (Shanghai) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhang Ying Information Technology (Shanghai) Co Ltd
Priority to CN201610203050.9A
Publication of CN107292219A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction

Abstract

The invention discloses a method and an electronic device for driving eye motion. The method includes: obtaining feature points of the eyes of a human face; fitting the eyes according to a preset eye model to obtain a fitting result; linearly amplifying the fitting result to obtain an amplified result; and driving the eyes in a target expression according to the amplified result. A relatively large fitting parameter gives the fitting result higher stability and better resistance to interference, linearly amplifying the fitting result restores high precision to the final result, and closed-eye detection ensures that the open/closed state of the eyes is reported correctly.

Description

Method and electronic device for driving eye motion
Technical field
The present invention relates to the field of computer technology, and in particular to a method and an electronic device for driving eye motion.
Background technology
During face recognition or expression migration, the identification and fitting of the eyes is critical. In the prior art, eye fitting generally suffers from large errors, is easily disturbed, and the open/closed state of the eyes is often judged inaccurately.
The content of the invention
To solve the above problems, the present invention provides a method and an electronic device for driving eye motion.
The technical solution is as follows:
In a first aspect, a method for driving eye motion is provided. The method includes:
obtaining feature points of the eyes of a human face;
fitting the eyes according to a preset eye model to obtain a fitting result;
linearly amplifying the fitting result to obtain an amplified result; and
driving the eyes in a target expression according to the amplified result.
With reference to the first aspect, in a first possible implementation, fitting the eyes according to the preset eye model to obtain the fitting result includes:
performing ridge regression estimation on the feature points of the eyes according to a preset formula and the preset eye model; and
obtaining the fitting result from the ridge regression estimate.
With reference to the first possible implementation of the first aspect, in a second possible implementation, the ridge regression parameter takes values in the range [0.01, 0.5].
With reference to the first aspect or any of its possible implementations, in a third possible implementation, before the eyes in the target expression are driven according to the amplified result, the method further includes:
performing closed-eye detection on the amplified result, and if the detection result is that the eye is closed, changing the state of the eyes to a closed state.
In a second aspect, an electronic device is provided. The electronic device includes:
an acquisition module, configured to obtain feature points of the eyes of a human face;
a fitting module, configured to fit the eyes according to a preset eye model to obtain a fitting result;
an amplification module, configured to linearly amplify the fitting result to obtain an amplified result; and
a drive module, configured to drive the eyes in a target expression according to the amplified result.
With reference to the second aspect, in a first possible implementation, the fitting module is specifically configured to:
perform ridge regression estimation on the feature points of the eyes according to a preset formula and the preset eye model; and
obtain the fitting result from the ridge regression estimate.
With reference to the first possible implementation of the second aspect, in a second possible implementation, the ridge regression parameter takes values in the range [0.01, 0.5].
With reference to the second aspect or any of its possible implementations, in a third possible implementation, the electronic device further includes a closed-eye detection module configured to: perform closed-eye detection on the amplified result, and if the detection result is that the eye is closed, change the state of the eyes to a closed state.
In a third aspect, an electronic device is provided. The device includes a memory and a processor connected to the memory, where the memory is configured to store a set of program code, and the processor calls the program code stored in the memory to perform the following operations:
obtaining feature points of the eyes of a human face;
fitting the eyes according to a preset eye model to obtain a fitting result;
linearly amplifying the fitting result to obtain an amplified result; and
driving the eyes in a target expression according to the amplified result.
With reference to the third aspect, in a first possible implementation, the processor calls the program code stored in the memory to perform the following operations:
performing ridge regression estimation on the feature points of the eyes according to a preset formula and the preset eye model; and
obtaining the fitting result from the ridge regression estimate.
With reference to the first possible implementation of the third aspect, in a second possible implementation, the ridge regression parameter takes values in the range [0.01, 0.5].
With reference to the third aspect or any of its possible implementations, in a third possible implementation, the processor calls the program code stored in the memory to perform the following operation:
performing closed-eye detection on the amplified result, and if the detection result is that the eye is closed, changing the state of the eyes to a closed state.
Embodiments of the present invention provide a method and an electronic device for driving eye motion. A relatively large fitting parameter gives the fitting result higher stability and better resistance to interference; linearly amplifying the fitting result then restores high precision to the final result; and closed-eye detection ensures that the open/closed state of the eyes is reported correctly.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for driving eye motion according to an embodiment of the present invention;
Fig. 2 is a flowchart of a method for driving eye motion according to an embodiment of the present invention;
Fig. 3 is a flowchart of a method for driving eye motion according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed description
To make the objects, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings.
An embodiment of the present invention provides a method for driving eye motion. The method can be implemented by an application program running on an electronic device; the electronic device can be any one of a smartphone, a tablet computer, and other electronic devices, and the embodiments of the present invention are not limited to a specific electronic device.
Embodiment one
An embodiment of the present invention provides a method for driving eye motion. Referring to Fig. 1, the method flow includes:
102. Obtain feature points of the eyes of a human face.
Specifically, the process can include: identifying the eyes of a face in a video frame; and extracting the feature points of the eyes.
It should be noted that the video frame can be captured in real time by a recording device such as a camera, or read from a video file; the embodiments of the present invention do not limit the way the video frame is acquired.
104. Fit the eyes according to a preset eye model to obtain a fitting result.
Specifically, the feature points of the eyes are fitted according to a preset formula and the preset eye model to obtain the fitting result.
The preset eye model can be a group of several pre-set eye models. The fitting can be linear or nonlinear; the embodiments of the present invention do not limit the specific fitting method.
106. Linearly amplify the fitting result to obtain an amplified result.
Specifically, the fitting result is linearly amplified according to a preset linear function to obtain the amplified result.
108. Drive the eyes in a target expression according to the amplified result.
Specifically, according to the positions of the eye feature points in the amplified result, the eyes in the target expression are driven to the positions corresponding to those feature points.
An embodiment of the present invention provides a method for driving eye motion in which linearly amplifying the fitting result gives the final result higher precision.
Embodiment two
An embodiment of the present invention provides a method for driving eye motion. Referring to Fig. 2, the method flow includes:
202. Obtain feature points of the eyes of a human face.
Specifically, the process can include: identifying the eyes of a face in a video frame; and extracting the feature points of the eyes.
It should be noted that the video frame can be captured in real time by a recording device such as a camera, or read from a video file; the embodiments of the present invention do not limit the way the video frame is acquired.
204. Fit the eyes according to a preset eye model to obtain a fitting result.
Specifically, taking the feature points of the eyes as the dependent variable Y and the preset eye model as the independent variable X, the linear regression formula is:

Y = aX + b (1)

where a and b are the parameters of the linear regression.
Formula (2) can be derived from formula (1):

(Y - b)Xᵀ = aXXᵀ (2)

From formula (2) and the ridge regression algorithm, formula (3) can be derived:

a = (Y - b)Xᵀ(XXᵀ + λI)⁻¹ (3)

where I is the identity matrix and λ is the ridge regression parameter.
From the feature points of the eyes, the preset eye model, and formula (3), estimates a' and b' of a and b can be calculated; from a', b', and the preset eye model X, Y' = a'X + b' can then be computed, where Y' is the fitting result, i.e., the estimate of Y.
For example, when there are 6 eye feature points and 8 preset eye models, Y, a, and b are 12-dimensional vectors and X is a 12 × 8 matrix.
It should be noted that, to obtain higher stability, a relatively large ridge regression parameter λ can be taken; for example, λ ∈ [0.01, 0.5], and preferably λ = 0.1. This range is only an example, and the embodiments of the present invention do not limit the ridge regression parameter to a specific range.
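As a concrete illustration, the closed-form estimate of formula (3) can be sketched in a few lines of NumPy. This is a minimal sketch under stated assumptions, not the patented implementation: the function name `ridge_fit` is ours, the offset b is treated here as a known neutral shape rather than jointly estimated, and X is written as a (k, d) matrix of k basis shapes so the matrix products are dimensionally consistent.

```python
import numpy as np

def ridge_fit(Y, X, b, lam=0.1):
    """Ridge estimate of the coefficients a in Y ≈ aX + b (formulas (1)-(3)).

    Y   : (d,)   stacked coordinates of the eye feature points
    X   : (k, d) preset eye model, k basis shapes of dimension d
    b   : (d,)   offset (assumed here to be a known neutral shape)
    lam : ridge parameter; the text suggests values in [0.01, 0.5]

    Computes a = (Y - b) X^T (X X^T + lam * I)^(-1), then returns the
    coefficient estimate a' and the fitting result Y' = a'X + b.
    """
    k = X.shape[0]
    a = (Y - b) @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(k))
    return a, a @ X + b
```

A larger λ shrinks the coefficients toward zero, trading a small bias for much lower variance; that is the stability and interference resistance the text refers to, and the linear amplification of step 206 then compensates for the shrinkage.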
206. Linearly amplify the fitting result to obtain an amplified result.
Specifically, the fitting result is linearly amplified according to a preset linear function to obtain the amplified result. The linear function can be:

f(x) = cx + d (4)

where c and d are amplification coefficients that can be set according to the actual situation. Substituting the fitting result Y' obtained in step 204 into formula (4) gives the amplified result Y'' = cY' + d; preferably, c = 1.1 and d = 0.1.
It should be noted that this linear function is only an example; other functions can also be chosen for amplification, and the embodiments of the present invention do not limit the amplification function. The values of the amplification coefficients are likewise only examples, and other values can be chosen.
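The amplification of formula (4) is a one-line affine map applied to each fitted value; a sketch follows, using the example coefficients from the text (the function name is an assumption):

```python
def amplify(y_fit, c=1.1, d=0.1):
    """Linear amplification Y'' = c * Y' + d (formula (4)), applied
    elementwise to the fitted values; c = 1.1 and d = 0.1 are the
    example coefficients given in the text."""
    return [c * v + d for v in y_fit]
```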
208. Drive the eyes in the target expression according to the amplified result.
Specifically, according to the positions of the eye feature points in the amplified result, the eyes in the target expression are driven to the positions corresponding to those feature points.
An embodiment of the present invention provides a method for driving eye motion in which a relatively large fitting parameter gives the result higher stability and better resistance to interference, while linearly amplifying the fitting result restores high precision to the final result.
Embodiment three
An embodiment of the present invention provides a method for driving eye motion. Referring to Fig. 3, the method flow includes:
302. Obtain feature points of the eyes of a human face.
Specifically, the process can include: identifying the eyes of a face in a video frame; and extracting the feature points of the eyes.
It should be noted that the video frame can be captured in real time by a recording device such as a camera, or read from a video file; the embodiments of the present invention do not limit the way the video frame is acquired.
304. Fit the eyes according to a preset eye model to obtain a fitting result.
Specifically, taking the feature points of the eyes as the dependent variable Y and the preset eye model as the independent variable X, the linear regression formula is:

Y = aX + b (1)

where a and b are the parameters of the linear regression.
Formula (2) can be derived from formula (1):

(Y - b)Xᵀ = aXXᵀ (2)

From formula (2) and the ridge regression algorithm, formula (3) can be derived:

a = (Y - b)Xᵀ(XXᵀ + λI)⁻¹ (3)

where I is the identity matrix and λ is the ridge regression parameter.
From the feature points of the eyes, the preset eye model, and formula (3), estimates a' and b' of a and b can be calculated; from a', b', and the preset eye model X, Y' = a'X + b' can then be computed, where Y' is the fitting result, i.e., the estimate of Y.
For example, when there are 6 eye feature points and 10 preset eye models, Y, a, and b are 12-dimensional vectors and X is a 12 × 10 matrix.
It should be noted that, to obtain higher stability, a relatively large ridge regression parameter λ can be taken; for example, λ ∈ [0.01, 0.5], and preferably λ = 0.2. This range is only an example, and the embodiments of the present invention do not limit the ridge regression parameter to a specific range.
306. Linearly amplify the fitting result to obtain an amplified result.
Specifically, the fitting result is linearly amplified according to a preset linear function to obtain the amplified result. The linear function can be:

f(x) = cx + d (4)

where c and d are amplification coefficients that can be set according to the actual situation. Substituting the fitting result Y' obtained in step 304 into formula (4) gives the amplified result Y'' = cY' + d; preferably, c = 1.05 and d = 0.15.
It should be noted that this linear function is only an example; other functions can also be chosen for amplification, and the embodiments of the present invention do not limit the amplification function. The values of the amplification coefficients are likewise only examples, and other values can be chosen.
308. Perform closed-eye detection on the amplified result; if the detection result is that the eye is closed, change the state of the eyes to a closed state.
Specifically, the process can be:
determine whether the distance between the position of each upper-eyelid feature point in the amplified result and the position of the corresponding lower-eyelid feature point is less than a preset threshold; if it is, judge that the eye corresponding to the amplified result is closed, and change the state of the eyes to the closed state;
otherwise, determine whether the position of an upper-eyelid feature point in the amplified result is lower than the position of the corresponding lower-eyelid feature point; if it is, judge that the eye corresponding to the amplified result is closed, and change the state of the eyes to the closed state.
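The two tests of step 308 can be sketched as follows. The landmark pairing, the coordinate convention (image y grows downward), and the threshold value are assumptions for illustration, not values from the patent:

```python
def is_eye_closed(upper, lower, dist_thresh=2.0):
    """Closed-eye test in the spirit of step 308.

    upper, lower : paired (x, y) feature points of the upper and
                   lower eyelids in the amplified result.
    Returns True if any pair is closer than dist_thresh, or if an
    upper-eyelid point sits below its lower-eyelid partner
    (y grows downward in image coordinates).
    """
    for (ux, uy), (lx, ly) in zip(upper, lower):
        if ((ux - lx) ** 2 + (uy - ly) ** 2) ** 0.5 < dist_thresh:
            return True
        if uy > ly:  # upper lid has crossed below the lower lid
            return True
    return False
```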
310. Drive the eyes in the target expression according to the amplified result.
Specifically, the process can be:
if the state of the eyes is the closed state, set the eyes in the target expression to the closed state;
otherwise, according to the positions of the eye feature points in the amplified result, drive the eyes in the target expression to the positions corresponding to those feature points.
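The branch of step 310 can be put together as a small sketch; the dict layout of the target expression is a hypothetical representation for illustration, not the patent's data structure:

```python
def drive_target_eyes(target_expr, amplified, eyes_closed):
    """Step 310 sketch: either mark the target expression's eyes
    closed, or move its eye landmarks to the positions taken from
    the amplified result."""
    if eyes_closed:
        target_expr["eyes_closed"] = True
    else:
        target_expr["eyes_closed"] = False
        target_expr["eye_landmarks"] = list(amplified)
    return target_expr
```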
An embodiment of the present invention provides a method for driving eye motion in which a relatively large fitting parameter gives the result higher stability and better resistance to interference, while linearly amplifying the fitting result restores high precision to the final result.
Embodiment four
An embodiment of the present invention provides an electronic device. Referring to Fig. 4, the electronic device includes:
an acquisition module 401, configured to obtain feature points of the eyes of a human face;
a fitting module 402, configured to fit the eyes according to a preset eye model to obtain a fitting result;
an amplification module 403, configured to linearly amplify the fitting result to obtain an amplified result; and
a drive module 404, configured to drive the eyes in a target expression according to the amplified result.
Optionally, the fitting module 402 is specifically configured to:
perform ridge regression estimation on the feature points of the eyes according to a preset formula and the preset eye model; and
obtain the fitting result from the ridge regression estimate.
The ridge regression parameter takes values in the range [0.01, 0.5].
Optionally, the electronic device further includes a closed-eye detection module 405 configured to: perform closed-eye detection on the amplified result, and if the detection result is that the eye is closed, change the state of the eyes to a closed state.
An embodiment of the present invention provides an electronic device in which a relatively large fitting parameter gives the result higher stability and better resistance to interference, while linearly amplifying the fitting result restores high precision to the final result.
Embodiment five
An embodiment of the present invention provides an electronic device. Referring to Fig. 5, the electronic device includes a memory 501 and a processor 502 connected to the memory 501, where the memory 501 is configured to store a set of program code, and the processor 502 calls the program code stored in the memory 501 to perform the following operations:
obtaining feature points of the eyes of a human face;
fitting the eyes according to a preset eye model to obtain a fitting result;
linearly amplifying the fitting result to obtain an amplified result; and
driving the eyes in a target expression according to the amplified result.
Optionally, the processor 502 calls the program code stored in the memory 501 to perform the following operations:
performing ridge regression estimation on the feature points of the eyes according to a preset formula and the preset eye model; and
obtaining the fitting result from the ridge regression estimate.
The ridge regression parameter takes values in the range [0.01, 0.5].
Optionally, the processor 502 calls the program code stored in the memory 501 to perform the following operation:
performing closed-eye detection on the amplified result, and if the detection result is that the eye is closed, changing the state of the eyes to a closed state.
An embodiment of the present invention provides an electronic device in which a relatively large fitting parameter gives the result higher stability and better resistance to interference, linearly amplifying the fitting result restores high precision to the final result, and closed-eye detection ensures that the open/closed state of the eyes is reported correctly.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention in any form. Although the present invention is disclosed above through preferred embodiments, it is not limited to them. Any person of ordinary skill in the art may, without departing from the scope of the technical solution of the present invention, make slight changes or modifications into equivalent embodiments using the technical content disclosed above; any simple modification, equivalent variation, or refinement made to the above embodiments according to the technical essence of the present invention still falls within the scope of the technical solution of the present invention.

Claims (8)

1. A method for driving eye motion, characterized in that the method includes:
obtaining feature points of the eyes of a human face;
fitting the eyes according to a preset eye model to obtain a fitting result;
linearly amplifying the fitting result to obtain an amplified result; and
driving the eyes in a target expression according to the amplified result.
2. The method according to claim 1, characterized in that fitting the eyes according to the preset eye model to obtain the fitting result includes:
performing ridge regression estimation on the feature points of the eyes according to a preset formula and the preset eye model; and
obtaining the fitting result from the ridge regression estimate.
3. The method according to claim 2, characterized in that the ridge regression parameter takes values in the range [0.01, 0.5].
4. The method according to any one of claims 1 to 3, characterized in that before the eyes in the target expression are driven according to the amplified result, the method further includes:
performing closed-eye detection on the amplified result, and if the detection result is that the eye is closed, changing the state of the eyes to a closed state.
5. An electronic device, characterized in that the electronic device includes:
an acquisition module, configured to obtain feature points of the eyes of a human face;
a fitting module, configured to fit the eyes according to a preset eye model to obtain a fitting result;
an amplification module, configured to linearly amplify the fitting result to obtain an amplified result; and
a drive module, configured to drive the eyes in a target expression according to the amplified result.
6. The electronic device according to claim 5, characterized in that the fitting module is specifically configured to:
perform ridge regression estimation on the feature points of the eyes according to a preset formula and the preset eye model; and
obtain the fitting result from the ridge regression estimate.
7. The device according to claim 6, characterized in that the ridge regression parameter takes values in the range [0.01, 0.5].
8. The electronic device according to any one of claims 5 to 7, characterized in that the electronic device further includes a closed-eye detection module configured to: perform closed-eye detection on the amplified result, and if the detection result is that the eye is closed, change the state of the eyes to a closed state.
CN201610203050.9A 2016-04-01 2016-04-01 Method and electronic device for driving eye motion Pending CN107292219A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610203050.9A CN107292219A (en) 2016-04-01 2016-04-01 Method and electronic device for driving eye motion


Publications (1)

Publication Number Publication Date
CN107292219A true CN107292219A (en) 2017-10-24

Family

ID=60087499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610203050.9A Pending CN107292219A (en) 2016-04-01 2016-04-01 Method and electronic device for driving eye motion

Country Status (1)

Country Link
CN (1) CN107292219A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739712A (en) * 2010-01-25 2010-06-16 四川大学 Video-based 3D human face expression cartoon driving method
CN101763503A (en) * 2009-12-30 2010-06-30 中国科学院计算技术研究所 Face recognition method of attitude robust
US20150035825A1 (en) * 2013-02-02 2015-02-05 Zhejiang University Method for real-time face animation based on single video camera
CN104616347A (en) * 2015-01-05 2015-05-13 掌赢信息科技(上海)有限公司 Expression migration method, electronic equipment and system
CN105184249A (en) * 2015-08-28 2015-12-23 百度在线网络技术(北京)有限公司 Method and device for processing face image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
黄雄 (Huang Xiong) et al.: "Human eye localization algorithm based on skin color model and gray-level projection", Journal of Chongqing University of Technology (Natural Science) *

Similar Documents

Publication Publication Date Title
KR102364993B1 (en) Gesture recognition method, apparatus and device
WO2019174439A1 (en) Image recognition method and apparatus, and terminal and storage medium
CN102841354A (en) Vision protection implementation method of electronic equipment with display screen
US20110188712A1 (en) Method and apparatus for determining fake image
US9141851B2 (en) Deformable expression detector
CN104021377B (en) A kind of mutual capacitance type palm grain identification method, device and touch-screen
CN107633237B (en) Image background segmentation method, device, equipment and medium
CN107454250A (en) Use eye care method during mobile terminal, device, mobile terminal and storage medium
CN107486863B (en) Robot active interaction method based on perception
CN104125327A (en) Screen rotation control method and system
US10282601B2 (en) Electronic device and gesture recognition method applied therein
US20160104036A1 (en) Method and apparatus for detecting blink
US9047504B1 (en) Combined cues for face detection in computing devices
CN106204573A (en) A kind of food control method and system of intelligent refrigerator
CN106412420B (en) It is a kind of to interact implementation method of taking pictures
CN107704813A (en) A kind of face vivo identification method and system
CN106600903A (en) Image-identification-based early-warning method and apparatus
CN107103271A (en) A kind of method for detecting human face
CN112843425B (en) Sleeping posture detection method and device based on sleeping pillow, electronic equipment and storage medium
EP3757878A1 (en) Head pose estimation
CN107292219A (en) The method and electronic equipment of a kind of driving eye motion
CN113269010B (en) Training method and related device for human face living body detection model
CN107506682A (en) A kind of man face characteristic point positioning method and electronic equipment
CN112183200A (en) Eye movement tracking method and system based on video image
EP2793102A2 (en) Information processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20171024