CN113221630A - Estimation method of human eye watching lens and application of estimation method in intelligent awakening - Google Patents
Estimation method of human eye watching lens and application of estimation method in intelligent awakening
- Publication number
- CN113221630A (application CN202110301136.6A)
- Authority
- CN
- China
- Prior art keywords
- camera
- picture
- delta
- awakening
- estimation method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/4401—Bootstrapping
- G06F9/4418—Suspend and resume; Hibernate and awake
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16Y—INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
- G16Y40/00—IoT characterised by the purpose of the information processing
- G16Y40/30—Control
Abstract
The invention discloses a method for estimating whether human eyes gaze at a camera lens, and its application in intelligent wake-up. A camera shoots the human face in real time, and feature points of the face in the picture are acquired through a face recognition API; after the 150 feature points in the picture are obtained, the minimum containing rectangle of each eye is found, and the two rectangles are recorded as M1 and M2; it is then calculated whether the human eyes gaze at the camera. While eliminating the wake word, the face parameters are continuously recognized to judge whether the user is gazing at the camera; if so, the Internet of Things central control is woken directly. By detecting through the camera whether the human eyes gaze at it, the artificial intelligence device is woken by gaze alone, so voice wake-up is unnecessary and noise is avoided.
Description
Technical Field
The invention relates to the technical field of intelligent detection, in particular to a method for estimating whether human eyes gaze at a camera lens and to the application of this estimation method in intelligent wake-up.
Background
Current intelligent assistants must all be activated by a spoken wake word or branded wake phrase; the user has to speak before the assistant wakes, and an intelligent wake-up method that does not rely on voice is lacking.
Therefore, a method for estimating whether human eyes gaze at the camera lens, and its application in intelligent wake-up, has become a problem to be urgently solved.
Disclosure of Invention
The invention aims to use a camera to judge whether the user intends to activate the Internet of Things central control, thereby providing an intelligent-device wake-up method that requires no wake word.
In order to achieve this purpose, the technical scheme provided by the invention is as follows. The method for estimating whether human eyes gaze at the lens, and its application in intelligent wake-up, comprises the following steps:
Step 1: a camera is adopted to shoot the human face in real time, and feature points of the face in the picture are acquired through a face recognition API;
Step 2: after the 150 feature points in the picture are obtained, the minimum containing rectangle of each eye is found;
Step 3: the two minimum containing rectangles are recorded as M1 and M2 respectively;
Step 4: let the center point of the left eyeball be OM1 and the center point of the right eyeball be OM2; the horizontal coordinates of OM1 and OM2 are added and divided by 2, and likewise the vertical coordinates, to obtain the eyebrow-center point A;
let the exact center point of the picture be B, and connect points A and B; the line AB then forms an included angle X with the upper half of the picture's central vertical axis, where X ranges from 0° to 360°;
the center points of rectangles M1 and M2 are calculated as O1 and O2 respectively; connecting O1 with OM1 forms a line segment S, and connecting O2 with OM2 forms a line segment T; the included angle between S and the upper half of the picture's central vertical axis is Y, and the included angle between T and the same axis is Z;
let the error threshold be Δ1; the absolute value of the difference between Y and X is D, and the absolute value of the difference between Z and X is E; if the average F of D and E is less than Δ1, the human eyes are gazing at the camera;
Step 5: the calculation scheme of steps 1-4 is only suitable for rough estimation; practical applications require the eye orientation to be judged more accurately, so the constant threshold Δ1 is replaced with a linear criterion, changed as follows:
the criterion for judging that the eyes gaze at the camera becomes X(1 − Δ1) < (Y + Z)/2 < X(1 + Δ1);
Step 6: while eliminating the wake word, the face parameters are continuously recognized and steps 1-5 are repeated to calculate whether the user gazes at the camera; if the user gazes at the camera, the Internet of Things central control is woken directly.
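Steps 2 and 3 above amount to taking the axis-aligned bounding box of each eye's landmark points. A minimal sketch in Python, assuming the face recognition API hands back a list of (x, y) landmark coordinates per eye (the function names are illustrative, not from the patent):

```python
def min_containing_rect(points):
    """Axis-aligned minimum containing rectangle of (x, y) landmark
    points, returned as (left, top, right, bottom)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def rect_center(rect):
    """Center point of a (left, top, right, bottom) rectangle,
    i.e. the O1/O2 points used in step 4."""
    left, top, right, bottom = rect
    return ((left + right) / 2.0, (top + bottom) / 2.0)
```

Applied to each eye's landmarks this yields the rectangles M1 and M2 and their centers O1 and O2.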
As an improvement, the value of Δ1 differs from person to person; data are recognized and recorded into the algorithm to participate in the calculation according to actual conditions.
Compared with the prior art, the invention has the advantage that whether the human eyes gaze at the camera is detected through the camera itself; the artificial intelligence device is woken simply by gazing at the camera, so voice wake-up is not needed and noise is avoided.
Detailed Description
The specific implementation process of the invention's method for estimating whether human eyes gaze at the lens, and of its application in intelligent wake-up, is as follows:
the method comprises the steps of shooting a face in real time by adopting a camera, and acquiring feature points of the face in a picture through a face recognition API;
after 150 feature points in the picture are obtained, finding out the minimum containing rectangle of each eye;
respectively calculating minimum inclusion rectangles as M1 and M2;
setting the center point of the left eyeball as OM1 and the center point of the right eyeball as OM2, respectively adding the horizontal and vertical coordinates of OM1 and OM2, and dividing by 2 to calculate the coordinate of the eyebrow center as A;
setting the positive central point of the picture as B, connecting the two points AB, and forming an included angle X by the line AB and the upper half section of the central axis at the moment, wherein the range of X is 0-360 degrees;
respectively calculating the central points of the rectangles M1 and M2 as O1 and O2, connecting O1 with OM1 to form a line segment S, connecting O2 with OM2 to form a line segment T, wherein the included angle formed by the S and the upper half part of the central axis of the whole picture is Y, and the included angle formed by the T and the upper half part of the central axis of the whole picture is Z;
setting the error as delta 1, calculating the absolute value of the difference value of Y and X as D, calculating the absolute value of the difference value of Z and X as E, and if the average F of D and E is less than delta 1, indicating that the human eyes look at the camera;
the calculation scheme is only suitable for an estimated application scene, the human eye orientation needs to be judged more accurately in practical application, the use mode of the constant delta 1 needs to be changed into linear calculation, and the change is as follows:
the judgment result is that the algorithm of looking at the camera is changed into X (1-Delta 1) < (Y + Z)/2< X (1+ Delta1);
meanwhile, an error value caused by the rotation of the head is added; the parameters of each person's brain bag are different, so there is also a need to compensate for an error value Δ 2.
Considering camera resolution, and to ensure an obvious effect in practical use, the camera settings and the captured pictures should meet the requirements shown in Table 1.
table 1: camera parameters and photo requirements
Cracked eye | 25mm |
Focal length of camera | 4mm |
Photosensitive element | 7mm*4.15mm |
Photo size | 1920*1080 |
Size of photo eye crack | X number of pixels |
Corresponding distance range | Y25X 4 1920/7/X/100 m |
Suppose that when X = 55, Y ≈ 5 meters; that is, to recognize a user within 5 meters, the palpebral fissure in the photo should preferably span more than 55 pixels. The fissure length is the straight-line distance from the inner canthus point to the outer canthus point. In brief, the palpebral fissure runs from the inner canthus, where the upper and lower eyelids meet on the inner side, to the outer canthus, where they meet on the outer side.
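Under the Table 1 convention, the distance formula can be written as a small helper; the parameter defaults are taken from the table, and the trailing division by 100 is kept exactly as the table states it:

```python
def max_distance_m(fissure_px, fissure_mm=25.0, focal_mm=4.0,
                   photo_width_px=1920, sensor_width_mm=7.0):
    """Distance Y in meters at which a palpebral fissure of
    `fissure_mm` spans `fissure_px` pixels in the photo, per
    Table 1's formula Y = 25 * 4 * 1920 / 7 / X / 100."""
    return (fissure_mm * focal_mm * photo_width_px
            / sensor_width_mm / fissure_px / 100.0)
```

With `fissure_px = 55` this evaluates to about 4.99, matching the text's "Y ≈ 5 meters".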
While eliminating the wake word, the face parameters are continuously recognized and it is calculated whether the user gazes at the camera; if the user gazes at the camera, the Internet of Things central control is woken directly.
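Putting the pieces together, the continuous wake logic might look like the following sketch. The four callables are caller-supplied stubs (camera grab, face recognition API, gaze test, and hub wake-up), and the consecutive-frame confirmation is an added safeguard not stated in the text:

```python
import time

def wake_loop(get_frame, detect_landmarks, gaze_check, wake_hub,
              confirm_frames=3, max_frames=None, poll_interval_s=0.05):
    """Continuously recognize face parameters and wake the IoT central
    control once the user has gazed at the camera for `confirm_frames`
    consecutive frames. Returns True if the hub was woken, False if
    `max_frames` frames elapse first (None means run indefinitely)."""
    streak = 0
    seen = 0
    while max_frames is None or seen < max_frames:
        seen += 1
        frame = get_frame()                  # grab a camera frame
        landmarks = detect_landmarks(frame)  # face recognition API call
        if landmarks is not None and gaze_check(landmarks):
            streak += 1                      # user still gazing
            if streak >= confirm_frames:
                wake_hub()                   # wake the IoT central control
                return True
        else:
            streak = 0                       # gaze broken; start over
        time.sleep(poll_interval_s)
    return False
```

In use, `gaze_check` would run the Δ1/Δ2 criterion from the previous sections on the landmarks of the current frame.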
The present invention and its embodiments have been described above, but the description is not limitative, and the actual structure is not limited thereto. In summary, those skilled in the art should appreciate that they can readily use the disclosed conception and specific embodiments as a basis for designing or modifying other structures for carrying out the same purposes of the present invention without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (2)
1. A method for estimating whether human eyes gaze at a lens, and its application in intelligent wake-up, characterized in that the method comprises the following steps:
step 1: a camera is adopted to shoot the human face in real time, and feature points of the face in the picture are acquired through a face recognition API;
step 2: after the 150 feature points in the picture are obtained, the minimum containing rectangle of each eye is found;
step 3: the two minimum containing rectangles are recorded as M1 and M2 respectively;
step 4: let the center point of the left eyeball be OM1 and the center point of the right eyeball be OM2; the horizontal coordinates of OM1 and OM2 are added and divided by 2, and likewise the vertical coordinates, to obtain the eyebrow-center coordinate A;
let the exact center point of the picture be B, and connect points A and B; the line AB then forms an included angle X with the upper half of the picture's central vertical axis, where X ranges from 0° to 360°;
the center points of rectangles M1 and M2 are calculated as O1 and O2 respectively; connecting O1 with OM1 forms a line segment S, and connecting O2 with OM2 forms a line segment T; the included angle between S and the upper half of the picture's central vertical axis is Y, and the included angle between T and the same axis is Z;
let the error threshold be Δ1; the absolute value of the difference between Y and X is D, and the absolute value of the difference between Z and X is E; if the average F of D and E is less than Δ1, the human eyes are gazing at the camera;
step 5: the calculation scheme of steps 1-4 is only suitable for rough estimation; practical applications require the eye orientation to be judged more accurately, so the constant threshold Δ1 is replaced with a linear criterion, changed as follows:
the criterion for judging that the eyes gaze at the camera becomes X(1 − Δ1) < (Y + Z)/2 < X(1 + Δ1);
step 6: while eliminating the wake word, the face parameters are continuously recognized and steps 1-5 are repeated to calculate whether the user gazes at the camera; if the user gazes at the camera, the Internet of Things central control is woken directly.
2. The method for estimating whether human eyes gaze at the lens, and its application in intelligent wake-up, according to claim 1, characterized in that: the value of Δ1 differs from person to person, and data are recognized and recorded into the algorithm to participate in the calculation according to actual conditions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110301136.6A CN113221630A (en) | 2021-03-22 | 2021-03-22 | Estimation method of human eye watching lens and application of estimation method in intelligent awakening |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113221630A true CN113221630A (en) | 2021-08-06 |
Family
ID=77084044
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110301136.6A Pending CN113221630A (en) | 2021-03-22 | 2021-03-22 | Estimation method of human eye watching lens and application of estimation method in intelligent awakening |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113221630A (en) |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1423228A (en) * | 2002-10-17 | 2003-06-11 | 南开大学 | Apparatus and method for identifying gazing direction of human eyes and its use |
CN101344919A (en) * | 2008-08-05 | 2009-01-14 | 华南理工大学 | Sight tracing method and disabled assisting system using the same |
CN101520838A (en) * | 2008-02-27 | 2009-09-02 | 中国科学院自动化研究所 | Automatic-tracking and automatic-zooming method for acquiring iris images |
KR20090105531A (en) * | 2008-04-03 | 2009-10-07 | 슬림디스크 주식회사 | The method and divice which tell the recognized document image by camera sensor |
US20100097227A1 (en) * | 2008-10-21 | 2010-04-22 | Samsung Electronics Co., Ltd. | Alarm control apparatus and method using face recognition |
CN102013011A (en) * | 2010-12-16 | 2011-04-13 | 重庆大学 | Front-face-compensation-operator-based multi-pose human face recognition method |
CN102594999A (en) * | 2012-01-30 | 2012-07-18 | 郑凯 | Method and system for performing adaptive mobile phone energy conservation through face identification |
CN103561142A (en) * | 2013-10-24 | 2014-02-05 | 广东明创软件科技有限公司 | Method and mobile terminal for controlling alarm clock state through human eye recognition |
CN105700363A (en) * | 2016-01-19 | 2016-06-22 | 深圳创维-Rgb电子有限公司 | Method and system for waking up smart home equipment voice control device |
US20170156589A1 (en) * | 2015-12-04 | 2017-06-08 | Shenzhen University | Method of identification based on smart glasses |
CN106919918A (en) * | 2017-02-27 | 2017-07-04 | 腾讯科技(上海)有限公司 | A kind of face tracking method and device |
CN206353336U (en) * | 2016-12-07 | 2017-07-25 | 浙江交通职业技术学院 | A kind of anti-collision porcelain device in vehicle traveling process |
CN108098767A (en) * | 2016-11-25 | 2018-06-01 | 北京智能管家科技有限公司 | A kind of robot awakening method and device |
US20190057247A1 (en) * | 2016-02-23 | 2019-02-21 | Yutou Technology (Hangzhou) Co., Ltd. | Method for awakening intelligent robot, and intelligent robot |
CN111145739A (en) * | 2019-12-12 | 2020-05-12 | 珠海格力电器股份有限公司 | Vision-based awakening-free voice recognition method, computer-readable storage medium and air conditioner |
CN112099621A (en) * | 2020-08-12 | 2020-12-18 | 杭州同绘科技有限公司 | System and method for eye-fixation unlocking robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||