CN111444789B - Myopia prevention method and system based on video sensing technology - Google Patents


Info

Publication number
CN111444789B
CN111444789B (grant) · CN111444789A (application publication) · application CN202010174241.3A
Authority
CN
China
Prior art keywords
distance
eye
eyeball
feedback
right eyeballs
Prior art date
Legal status
Active
Application number
CN202010174241.3A
Other languages
Chinese (zh)
Other versions
CN111444789A (en)
Inventor
杨曦
Current Assignee
Shenzhen Time Zhihui Technology Co ltd
Original Assignee
Shenzhen Time Zhihui Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Time Zhihui Technology Co ltd filed Critical Shenzhen Time Zhihui Technology Co ltd
Priority to CN202010174241.3A priority Critical patent/CN111444789B/en
Publication of CN111444789A publication Critical patent/CN111444789A/en
Application granted granted Critical
Publication of CN111444789B publication Critical patent/CN111444789B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/161 — Human faces: Detection; Localisation; Normalisation
    • G06V40/172 — Human faces: Classification, e.g. identification
    • G06V40/19 — Eye characteristics, e.g. of the iris: Sensors therefor
    • G06V40/20 — Movements or behaviour, e.g. gesture recognition
    • G08B21/24 (Signalling; Alarm systems; Status alarms) — Reminder alarms, e.g. anti-loss alarms
    • Y02D10/00 (Climate change mitigation in ICT) — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of video sensing and provides a myopia prevention method based on video sensing technology, comprising the following steps: capturing a main image in real time and recognizing and acquiring a face image in the main image; identifying and calculating the position coordinates of the left and right eyeballs in the face image, and calculating the gaze angles of the left and right eyeballs from those coordinates; calculating the imaging distance between the left and right eyeballs and the imaging unit, and respectively calculating the eye-use distance between each eyeball and the gaze point from the imaging distance and gaze angle; and analyzing whether the eye-use distance and its stay time exceed the corresponding preset prevention thresholds and, if so, generating feedback reminder information to alert the user. A myopia prevention system based on video sensing technology is also provided. The invention can thus update the actual eye-use distance between the user's eyes and the gazed point in real time and promptly prompt the user to correct their reading posture, thereby helping to prevent myopia.

Description

Myopia prevention method and system based on video sensing technology
Technical Field
The invention relates to the technical field of video sensing, and in particular to a myopia prevention method and system based on video sensing technology.
Background
The proportion of myopic teenagers in China is the highest in the world. According to a report published by the National Health Commission at the end of April 2019, the myopia rate among children and teenagers nationwide is as high as 53.6%, and the proportion increases with age, with about 81% of high school students being myopic. Preventing myopia has become a major issue for the healthy growth of children and teenagers today.
Studies at home and abroad widely accept that incorrect eye-use habits cause myopia in children and teenagers, including: 1. too close an eye-use distance; 2. too long a period of continuous eye use; and 3. uneven use of the left and right eyes. Incorrect eye use increases the muscular pressure on the eyeball and elongates it; over time the eyes lose accommodative power and develop refractive errors, distant objects can no longer be brought into focus, and myopia finally forms.
Chinese patent CN104077583A discloses a camera-based myopia prevention device that requires the user, during initial calibration, to determine a suitable distance between himself and the electronic device and to capture a user image through the device's camera as a standard image, thereby setting a face-to-screen distance that meets the standard. In operation, the camera continuously captures images of the user and compares them with the standard image; if a captured image is larger than the standard image, the distance between the user and the electronic device is judged to be smaller than the preset suitable distance. This prior art must be calibrated before first use to acquire a standard image for distance judgement; it lacks an automatic depth-sensing system that acquires actual depth data and cannot automatically enforce the eye-use distances specified by eye-protection guidelines. Moreover, it only detects the distance between the user and the camera, and does not accurately detect the actual distance between the user's left and right eyes and the gaze point, such as a book on a desk. If the user moves the camera device or the book during use, the standard image the prior art relies on loses its value and the distance judgement becomes inaccurate.
In summary, the conventional method has many problems in practical use and is in need of improvement.
Disclosure of Invention
In view of these defects, the invention aims to provide a myopia prevention method and system based on video sensing technology that can update the actual eye-use distance between the user's eyes and the gazed point in real time, promptly prompt the user to correct his reading posture, and thereby help prevent myopia.
In order to achieve the above object, the present invention provides a myopia prevention method based on video sensing technology, comprising:
a camera recognition step of capturing a main image in real time and recognizing and acquiring a face image in the main image;
an eyeball tracking step of identifying and calculating the position coordinates of the left and right eyeballs in the face image, and calculating the gaze angles of the left and right eyeballs from the position coordinates;
an eye-use distance step of calculating the imaging distance between the left and right eyeballs and the imaging unit, and respectively calculating the eye-use distance between each eyeball and the gaze point from the imaging distance and gaze angle;
and an analysis feedback step of analyzing whether the eye-use distance and its stay time exceed the corresponding preset prevention thresholds and, if so, generating feedback reminder information to alert the user.
Preferably, the method further comprises, after the analysis feedback step:
a feedback information processing step of receiving the feedback reminder information and, according to it, triggering the generation of multimedia display information and/or functional operation signals.
In the myopia prevention method based on video sensing technology, the eyeball tracking step comprises:
an eyeball identification step of analyzing and identifying left and right eyeballs in the face image;
an eyeball coordinate step of respectively positioning a first group of coordinate points of iris center points of the left eyeball and the right eyeball and a second group of coordinate points of symmetrical eye corners of the left eyeball and the right eyeball in the face image;
and a gazing angle step of calculating vectors corresponding to the first group of coordinate points and the second group of coordinate points of the left and right eyeballs and analyzing the changes of the vectors to calculate gazing angles of the left and right eyeballs.
In the myopia prevention method based on video sensing technology, the eye-use distance step comprises:
a depth sensing calculation step of calculating the imaging distances between the left and right eyeballs and the imaging unit according to a camera depth-sensing algorithm;
and an eye-use distance conversion step of respectively calculating the left and right eye-use distances between the left and right eyeballs and the gaze point according to the imaging distance and gaze angle corresponding to each eyeball.
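As a rough illustration of this conversion step (an assumption for illustration, not the patent's actual formula): if the gaze point is taken to lie approximately in the plane of the camera, e.g. a book lying beside the device, the eye-use distance follows from the imaging distance and gaze angle by simple trigonometry:

```python
import math

def eye_use_distance(imaging_distance_cm: float, gaze_angle_deg: float) -> float:
    """Eye-to-gaze-point distance, assuming the eye-camera-gazepoint triangle
    is right-angled at the camera (an illustrative simplification)."""
    return imaging_distance_cm / math.cos(math.radians(gaze_angle_deg))

# Computed separately for each eye, so uneven left/right eye use
# can be detected in the later analysis feedback step.
left_cm = eye_use_distance(30.0, 20.0)   # left eye 30 cm from camera, gazing 20 degrees off-axis
right_cm = eye_use_distance(31.5, 24.0)
```

With a zero gaze angle the eye-use distance reduces to the imaging distance itself, which is a quick sanity check on the geometry.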
In the myopia prevention method based on video sensing technology, the analysis feedback step comprises:
a left-right eyeball distance comparison step of comparing the eye-use distances of the left and right eyeballs to calculate an uneven value between them; and/or
a time recording step of respectively recording the stay time of the eye-use distances of the left and right eyeballs.
In the myopia prevention method based on video sensing technology, the analysis feedback step further comprises:
a first analysis feedback step of analyzing and judging whether the eye-use distance of the left and right eyeballs exceeds a preset distance threshold and, if so, generating first feedback reminder information;
a second analysis feedback step of, if the eye-use distance does not exceed the distance threshold, analyzing and judging whether the uneven value exceeds a preset unevenness threshold and, if so, generating second feedback reminder information;
and a third analysis feedback step of, if the uneven value does not exceed the unevenness threshold, analyzing and judging whether the stay time exceeds a preset time threshold and, if so, generating third feedback reminder information.
More preferably, the camera recognition step comprises:
a real-time capture step of capturing the main image reflected in the lens in real time;
a face image step of analyzing and identifying at least one face image in the main image;
and a face recognition step of matching the face image against pre-stored face images in a pre-stored user database to obtain the corresponding user identity information.
Also provided is a myopia prevention system based on video sensing technology, comprising:
a camera recognition unit for capturing a main image in real time and recognizing and acquiring a face image in the main image;
an eyeball tracking unit for identifying and calculating the position coordinates of the left and right eyeballs in the face image, and calculating the gaze angles of the left and right eyeballs from the position coordinates;
an eye-use distance unit for calculating the imaging distance between the left and right eyeballs and the imaging unit, and respectively calculating the eye-use distance between each eyeball and the gaze point from the imaging distance and gaze angle;
and an analysis feedback unit for analyzing whether the eye-use distance and its stay time exceed the corresponding preset prevention thresholds and, if so, generating feedback reminder information to alert the user.
Preferably, the system further comprises:
a feedback information processing unit for receiving the feedback reminder information and, according to it, triggering the generation of multimedia display information and/or functional operation signals.
In the myopia prevention system based on video sensing technology, the eyeball tracking unit comprises:
the eyeball identification subunit is used for analyzing and identifying left and right eyeballs in the face image;
an eyeball coordinate subunit, configured to respectively locate, in the face image, a first set of coordinate points of iris center points of the left and right eyeballs and a second set of coordinate points of symmetrical corners of the left and right eyeballs;
and the gazing angle subunit is used for calculating vectors corresponding to the first group of coordinate points and the second group of coordinate points of the left eyeball and the right eyeball and analyzing the changes of the vectors so as to calculate gazing angles of the left eyeball and the right eyeball.
In the myopia prevention system based on video sensing technology, the eye-use distance unit comprises:
a depth sensing calculation subunit for calculating the imaging distances between the left and right eyeballs and the imaging unit according to a camera depth-sensing algorithm;
and an eye-use distance conversion subunit for respectively calculating the left and right eye-use distances between the left and right eyeballs and the gaze point according to the imaging distance and gaze angle corresponding to each eyeball.
In the myopia prevention system based on video sensing technology, the analysis feedback unit comprises:
a left-right eyeball distance comparison subunit for comparing the eye-use distances of the left and right eyeballs to calculate an uneven value between them; and/or
a time recording subunit for respectively recording the stay time of the eye-use distances of the left and right eyeballs.
In the myopia prevention system based on video sensing technology, the analysis feedback unit further comprises:
a first analysis feedback subunit for analyzing and judging whether the eye-use distance of the left and right eyeballs exceeds a preset distance threshold and, if so, generating first feedback reminder information;
a second analysis feedback subunit for, if the eye-use distance does not exceed the distance threshold, analyzing and judging whether the uneven value exceeds a preset unevenness threshold and, if so, generating second feedback reminder information;
and a third analysis feedback subunit for, if the uneven value does not exceed the unevenness threshold, analyzing and judging whether the stay time exceeds a preset time threshold and, if so, generating third feedback reminder information.
More preferably, the camera recognition unit comprises:
a real-time capture subunit for capturing the main image reflected in the lens in real time;
a face image subunit for analyzing and identifying at least one face image in the main image;
and a face recognition subunit for matching the face image against pre-stored face images in a pre-stored user database to obtain the corresponding user identity information.
The myopia prevention method based on video sensing technology is mainly applied to myopia prevention for children and teenagers. By means of depth sensing and eyeball tracking, it calculates through video tracking the distance between the user's left and right eyes and the book, and judges against eye-protection criteria for myopia prevention whether the user's eye-use distance and eye-use time are appropriate. If the user is detected reading or writing at too short a distance or for too long, the system prompts the user in real time, thereby helping children and teenagers prevent myopia.
Drawings
Fig. 1 is a flowchart showing steps of a myopia prevention method based on a video sensing technology according to a first embodiment of the present invention;
FIG. 2 is a flowchart showing steps of a myopia prevention method based on a video sensing technology according to a second embodiment of the present invention;
FIG. 3 is a flowchart showing the preferred eye tracking steps of a myopia prevention method based on video sensing technology according to the first or second embodiment of the present invention;
FIG. 4 is a flowchart showing the preferred eye distance steps of a myopia prevention method based on video sensing technology according to the first or second embodiment of the present invention;
FIG. 5 is a flowchart showing the preferred feedback steps of the method for preventing myopia based on video sensing technology according to the first or second embodiment of the present invention;
fig. 6 is a flowchart showing the steps of capturing and identifying a myopia prevention method based on a video sensing technology according to the first embodiment or the second embodiment of the present invention;
fig. 7 is a block diagram of a myopia prevention system based on video sensing technology according to the first embodiment of the present invention;
fig. 8 is a block diagram of a myopia prevention system based on video sensing technology according to the second embodiment of the present invention;
FIG. 9 is a block diagram showing a preferred structure of the eye tracking unit of the myopia prevention system based on the video sensing technology according to the first embodiment or the second embodiment of the present invention;
FIG. 10 is a block diagram showing the construction of a preferred eye distance unit of a myopia prevention system based on video sensing technology according to the first or second embodiment of the present invention;
FIG. 11 is a block diagram showing the structure of an analysis feedback unit according to the first embodiment or the second embodiment of the myopia prevention system based on the video sensing technology;
fig. 12 is a block diagram showing the structure of the camera recognition unit of the myopia prevention system based on the video sensing technology according to the first embodiment or the second embodiment of the present invention;
FIG. 13 is a schematic diagram showing a specific workflow of a myopia prevention system based on video sensing technology according to the second embodiment of the present invention;
fig. 14 is a schematic view showing a structure of a calculated camera distance of a myopia prevention system according to the first embodiment or the second embodiment of the present invention;
fig. 15 is a schematic diagram of a structure of an eye distance calculating system for myopia prevention based on a video sensing technology according to the first embodiment or the second embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It should be noted that references in the specification to "one embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Furthermore, such phrases are not intended to refer to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Furthermore, certain terms are used throughout the specification and the following claims to refer to particular components or parts, and those of ordinary skill in the art will understand that manufacturers may refer to the same component or part by different names. This specification and the following claims distinguish components not by difference in name but by difference in function. In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion and should be interpreted as "including, but not limited to". The term "coupled" covers any direct or indirect electrical connection; an indirect electrical connection includes connection via other devices.
Fig. 7 shows a myopia prevention system 100 according to the first embodiment of the present invention, which comprises a camera recognition unit 10, an eye tracking unit 20, an eye distance unit 30, and an analysis feedback unit 40, wherein:
The camera recognition unit 10 is used for capturing a main image in real time and recognizing and acquiring a face image in it; the eyeball tracking unit 20 is used for identifying and calculating the position coordinates of the left and right eyeballs in the face image and calculating their gaze angles from those coordinates; the eye-use distance unit 30 is used for calculating the imaging distance between the left and right eyeballs and the imaging unit 101, and for calculating the eye-use distance between each eyeball and the gaze point from the imaging distance and gaze angle; the analysis feedback unit 40 is configured to analyze whether the eye-use distance and its stay time exceed the corresponding preset prevention thresholds and, if so, to generate feedback reminder information to alert the user. The system detects the user's left and right eyeballs by video sensing, calculates in real time the gaze angles and the eye-use distances between the eyeballs and the gaze point, sets the corresponding prevention thresholds according to published eye-protection guidelines, and detects in real time whether the user's reading posture, sitting posture or continuous reading is irregular, issuing feedback reminders accordingly to prevent myopia. Even if the user moves the gazed object, such as a book, during operation, the system still updates the actual eye-use distance between the user's eyes and the gaze point on the book in real time, achieving automatic sensing and feedback reminders.
The eye-use data collected by the system is not used to punish users with incorrect habits. In a classroom or at home, the system can turn students' correct eye-use data into an interactive multi-player competitive game, encouraging students to prevent myopia voluntarily by competing with classmates on correct eye use. Through eyeball tracking, the system can continuously record and analyze the gaze angle and duration of a student's eyes in class and while doing homework, and judge whether the student is gazing at the book or the blackboard in front or is unable to concentrate, thereby obtaining data on the student's concentration during study. The system can report this concentration data to teachers or parents, so that teachers can adjust their teaching appropriately and parents can encourage the student, effectively cultivating the student's ability to concentrate and improving academic performance in the long term.
Fig. 8 shows a myopia prevention system 200 based on video sensing technology according to the second embodiment of the present invention, comprising a camera recognition unit 10, an eyeball tracking unit 20, an eye-use distance unit 30, an analysis feedback unit 40 and a feedback information processing unit 50. As in the first embodiment: the camera recognition unit 10 captures a main image in real time and recognizes and acquires a face image in it; the eyeball tracking unit 20 identifies and calculates the position coordinates of the left and right eyeballs in the face image and calculates their gaze angles from those coordinates; the eye-use distance unit 30 calculates the imaging distance between the left and right eyeballs and the imaging unit 101, and from the imaging distance and gaze angle calculates the eye-use distance between each eyeball and the gaze point; the analysis feedback unit 40 analyzes whether the eye-use distance and its stay time exceed the corresponding preset prevention thresholds and, if so, generates feedback reminder information to alert the user.
The difference from the first embodiment is that this embodiment further provides a feedback information processing unit 50 for receiving the feedback reminder information and, according to it, triggering the generation of multimedia display information and/or functional operation signals. The multimedia display information includes, but is not limited to, sound, voice, light, image signals, video signals and/or exercise programs for eye care and neck-and-shoulder relaxation; the functional operation signals include, but are not limited to, screen display, functional operations, application programs, computer software and/or sending prompt information to the electronic equipment of a preset third-party account (such as a teacher's and/or parent's account for a student).
The units in the first or second embodiment may be connected by wired or wireless networks; the wireless connection may be a wireless network, a Bluetooth connection, a mobile communication network or another wireless communication method.
Referring to fig. 9, optionally, the eye tracking unit 20 includes an eye recognition subunit 201, an eye coordinate subunit 202, and a gaze angle subunit 203, where:
The eyeball identification subunit 201 is configured to analyze and identify the left and right eyeballs in the face image; the eyeball coordinate subunit 202 is configured to respectively locate, in the face image, a first set of coordinate points at the iris center points of the left and right eyeballs and a second set of coordinate points at symmetrical eye corners of the left and right eyeballs; the gaze angle subunit 203 is configured to calculate the vectors corresponding to the first and second sets of coordinate points and to analyze the changes of those vectors to calculate the gaze angles of the left and right eyeballs. Specifically, image analysis identifies the left and right eyeballs in the face image and then locates the center point of each iris. The eye corners of the two eyeballs are taken symmetrically: for example, the left corner of the left eyeball and the right corner of the right eyeball form one group, or the right corner of the left eyeball and the left corner of the right eyeball form one group. Preferably, the second set of coordinate points comprises the left-corner coordinate of the left eyeball and the right-corner coordinate of the right eyeball. The vectors corresponding to the two sets of coordinate points then comprise a first vector from the left eyeball's left-corner coordinate to its iris center, and a second vector from the right eyeball's right-corner coordinate to its iris center. The first and second vectors are calculated respectively, and the gaze angles of the left and right eyeballs are obtained by analyzing their changes; from these gaze angles, the distance between the eyeballs and the gazed point on an object such as a book or screen can further be obtained by analysis.
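The corner-to-iris vector construction described above can be sketched as follows. The pixel coordinates, the straight-gaze reference vector and the angle computation are illustrative assumptions, not the patent's formula:

```python
import math

def corner_to_iris(iris_center: tuple, eye_corner: tuple) -> tuple:
    """Vector from the comparatively stable eye-corner landmark
    to the iris center, both in image pixel coordinates."""
    return (iris_center[0] - eye_corner[0], iris_center[1] - eye_corner[1])

def gaze_angle_deg(current_vec: tuple, reference_vec: tuple) -> float:
    """Angle between the current corner-to-iris vector and a reference
    vector captured while the eye looks straight at the camera; a
    simplified reading of 'analyzing the change of the vectors'."""
    dot = current_vec[0] * reference_vec[0] + current_vec[1] * reference_vec[1]
    norm = math.hypot(*current_vec) * math.hypot(*reference_vec)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Left eye: corner at (100, 200); iris at (120, 200) when gazing straight
# ahead, now shifted down toward a book on the desk to (120, 206).
ref = corner_to_iris((120, 200), (100, 200))
cur = corner_to_iris((120, 206), (100, 200))
```

The same computation is run independently for the right eye with its own corner landmark, which is what allows the later steps to detect uneven use of the two eyes.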
Referring to fig. 10, the first embodiment or the second embodiment of the present invention may optionally include an eye distance unit 30 comprising a depth sensing calculation subunit 301 and an eye distance conversion subunit 302, wherein:
the depth sensing calculation subunit 301 is configured to calculate the imaging distances between the left and right eyeballs and the imaging unit according to a camera depth-sensing algorithm; the eye distance conversion subunit 302 is configured to calculate the left and right eye distances between the left and right eyeballs and the gaze point according to the imaging distances and the gaze angles corresponding to the left and right eyeballs, respectively. Referring to fig. 14, the distance between a point in the image and the camera is calculated by stereoscopic depth sensing with a dual-camera rig; alternatively, a depth-sensing smart camera, or any other camera implementing a depth-sensing algorithm in the prior art, may be used. Once the left and right imaging distances corresponding to the left and right eyeballs are calculated, the left and right eye distances between the eyeballs and the gaze point can be obtained from the gaze angles calculated above.
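A minimal sketch of the dual-camera depth calculation, assuming rectified images and the standard pinhole relation z = f·B/d; the focal length, baseline, and pixel coordinates below are hypothetical, not values from the patent:

```python
def stereo_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of a point seen by both cameras of a rectified stereo pair:
    z = focal_length * baseline / disparity."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity

# Hypothetical rig: 800 px focal length, 6 cm baseline; the iris center
# appears at x=560 in the left image and x=500 in the right image.
z = stereo_depth(800, 0.06, 560, 500)   # imaging distance of 0.8 m
```

The same relation can be evaluated once per eyeball to obtain the left and right imaging distances.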
Preferably, the analysis feedback unit 40 includes a left-right eyeball distance comparison subunit for comparing the eye distances of the left and right eyeballs to calculate an uneven value between them; the uneven value is the difference between the left eye distance and the right eye distance.
More preferably, the analysis feedback unit 40 further includes a time recording subunit for respectively recording the stay time of the eye distances of the left and right eyeballs; the stay time is the time for which the left and right eyeballs remain on the fixation point.
Referring to fig. 11, the analysis feedback unit 40 further includes a first analysis feedback subunit 401, a second analysis feedback subunit 402, and a third analysis feedback subunit 403, which are optional in the first embodiment or the second embodiment of the present invention; wherein:
the first analysis feedback subunit 401 is configured to analyze and determine whether the eye distance of the left and right eyeballs exceeds a preset distance threshold, and if so, generate first feedback reminding information; the second analysis feedback subunit 402 is configured to analyze and determine, if the eye distance does not exceed the distance threshold, whether the uneven value exceeds a preset uneven value threshold, and if so, generate second feedback reminding information; the third analysis feedback subunit 403 is configured to analyze and determine, if the uneven value does not exceed the uneven value threshold, whether the stay time exceeds a preset time threshold, and if so, generate third feedback reminding information.
By analyzing whether the left and right eye distances of the two eyeballs exceed a preset distance threshold, the system judges whether the user is too close to the watched object; if either eye exceeds the distance threshold of the preset safe distance, the first feedback reminding information is generated to remind the user that the safe watching distance has been exceeded, so that the user, or a guardian, can correct it. Teenagers and children often develop myopia because fatigue, leaning to one side of the body, or a poor sitting posture makes the distances between the left and right eyes and the book uneven; long-term uneven use of the eyes over-fatigues one eye and, over time, leaves the other eye with inaccurate diopter. By analyzing the uneven value between the eye distances of the left and right eyeballs, the system judges whether the user's gazing posture is sufficiently standard, so that when the uneven value is detected to exceed the preset uneven value threshold, the second feedback reminding information is generated to remind the user to adjust the gazing posture. When the stay time of the user's gaze exceeds the preset time threshold, the third feedback reminding information is generated to remind the user to take a proper rest or shift the gaze, so as to relieve eye fatigue.
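The three-tier judgment of subunits 401-403 can be sketched as below. The threshold values are illustrative assumptions, and the first check is interpreted as the distance falling below a minimum safe distance (the text's "exceeds a preset distance threshold" in the sense of being too close):

```python
def analyze_feedback(left_dist_m, right_dist_m, stay_time_s,
                     dist_threshold_m=0.33, uneven_threshold_m=0.05,
                     time_threshold_s=2400):
    """Cascaded checks: distance first, then unevenness, then stay time.
    Threshold defaults (~33 cm, 5 cm, 40 min) are assumed, not from the patent."""
    if min(left_dist_m, right_dist_m) < dist_threshold_m:
        return "first feedback: safe watching distance exceeded"
    if abs(left_dist_m - right_dist_m) > uneven_threshold_m:
        return "second feedback: uneven left/right eye distance, adjust posture"
    if stay_time_s > time_threshold_s:
        return "third feedback: take a rest or shift your gaze"
    return None  # no reminder needed
```

The ordering mirrors the description: the unevenness check runs only when the distance is safe, and the stay-time check only when the unevenness is acceptable.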
Referring to fig. 12, the image capturing and identifying unit 10 includes an image capturing unit 101, a face image subunit 102, and a face identifying subunit 103, where:
the image capturing unit 101 is used for capturing the main image reflected in the lens in real time; the face image subunit 102 is configured to parse and identify at least one face image in the main image; the face recognition subunit 103 is configured to match the face image against pre-stored face images in a pre-stored user database to obtain the corresponding user identity information. Through the face recognition function, the system can verify that the acquired eye distance, eye time, and left-right uneven value belong to the same user, improving the reliability of the data.
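A sketch of the identity-matching idea behind subunit 103, so that distance and time records are attributed to one user. Nearest-neighbour matching on fixed-length face embeddings is an assumed scheme (the patent does not name one), and the embeddings and threshold are hypothetical:

```python
import math

def match_user(face_embedding, user_db, max_distance=0.6):
    """Return the identity of the pre-stored user whose embedding is nearest
    to the probe, or None if no stored face is close enough."""
    best_id, best_d = None, float("inf")
    for user_id, stored in user_db.items():
        d = math.dist(face_embedding, stored)
        if d < best_d:
            best_id, best_d = user_id, d
    return best_id if best_d <= max_distance else None

# Hypothetical 3-dimensional embeddings standing in for real face features:
user_db = {"student_a": (0.1, 0.2, 0.3), "student_b": (0.9, 0.8, 0.7)}
```

A real deployment would use embeddings from a trained face-recognition model; the matching logic stays the same.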
Preferably, two identically configured cameras form the image capturing unit 101; the two cameras are fixed side by side, left and right (as shown in fig. 14), and the upward tilt of the lenses is adjusted so that the user's whole face can be captured (e.g., at a 30° elevation angle). The image capturing unit 101 is placed above the object being watched (such as a book) and on the same horizontal plane as it (such as the desktop); the objects being watched include, but are not limited to, electronic and non-electronic books, writing books, physical objects, and the like. The two cameras capture images of the user synchronously, and the two user images are transmitted over a wireless or wired communication network to the corresponding functional units for the next operation.
Referring to fig. 15, using an eye-tracking algorithm disclosed in the prior art, the gaze angle ∠FEB of one of the user's eyes is calculated from the eyeball position coordinates; using a publicly disclosed dual-lens stereoscopic image depth algorithm, the distance z between the eyeball and the camera is calculated; the angle ∠ECF between the eyeball and the camera is calculated from the eyeball's coordinate position in the photo; the vertical height h of the eyeball is then calculated from the distance z and the angle ∠ECF. If the value of ∠FEB is smaller than (90° − ∠ECF), i.e. the user's gaze point likely falls on the book on the desktop, the system calculates the distance t1 between that eyeball and its gaze point on the book, and the value of t1 is that eye's eye distance; similarly, the distance t2 between the user's other eye and its gaze point on the book is calculated, and the value of t2 is that eye's eye distance.
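The fig. 15 geometry can be sketched as follows, under assumed conventions: C is the camera on the desktop plane, E the eyeball, F the point on the desk directly below E, and B the gaze point on the book, so that h = z·sin∠ECF and t = h / cos∠FEB, with ∠FEB measured from the vertical EF. These conventions are inferred from the description, not stated explicitly in the patent:

```python
import math

def eye_distance_to_book(z_m, angle_ECF_deg, angle_FEB_deg):
    """Per-eye distance t between the eyeball and its gaze point on the book.
    Returns None when the gaze is too shallow to land on the desktop book,
    i.e. when the condition angle_FEB < (90 deg - angle_ECF) fails."""
    if not angle_FEB_deg < 90.0 - angle_ECF_deg:
        return None
    h = z_m * math.sin(math.radians(angle_ECF_deg))   # vertical eye height
    return h / math.cos(math.radians(angle_FEB_deg))  # slant distance to B

# Hypothetical reading pose: eye 0.5 m from the camera, 30 degrees above it,
# gazing 40 degrees off the vertical (40 < 90 - 30, so the gaze hits the book):
t1 = eye_distance_to_book(0.5, 30.0, 40.0)
```

Evaluating the function once per eye yields t1 and t2, the two per-eye distances compared later for the uneven value.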
The system according to the first embodiment or the second embodiment of the present invention may be a computer, a mobile phone, an electronic reader, a tablet computer, an embedded system, or another computing device.
Fig. 1 shows a myopia prevention method based on a video sensing technology according to a first embodiment of the present invention, comprising the steps of:
S101: a camera shooting identification step, namely shooting a main image in real time and identifying and acquiring a face image in the main image;
S102: recognizing and calculating position coordinates of left and right eyeballs in the face image, and calculating the gazing angles of the left and right eyeballs through the position coordinates;
S103: calculating the imaging distance between the left and right eyeballs and the imaging unit, and respectively calculating the eye distance between the left and right eyeballs and the gaze point according to the imaging distance and the gaze angle;
S104: analyzing whether the eye distance and the stay time of the eye distance exceed the corresponding preset prevention thresholds, and if so, generating feedback reminding information to remind the user. The purpose of the feedback is to prompt the user to adjust the eye-to-book distance in time and correct the sitting posture, avoiding myopia caused by viewing too closely, using the eyes for too long, and/or uneven use of the left and right eyes; the system protects the user's visual health by improving the user's reading or writing posture.
The method is mainly applied to preventing myopia in teenagers and children. Through eyeball tracking, the gaze angles and durations of a student's eyes in class and while doing homework can be continuously recorded and analyzed, and it can be judged whether the student is watching the book or the blackboard in front, or whether the eyes are wandering and unable to concentrate, thereby yielding data on the student's degree of concentration during study. The system can report the student's concentration to teachers or parents, so that teachers can adjust their teaching methods appropriately and parents can encourage the student appropriately, effectively cultivating the student's ability to concentrate on learning and improving academic performance in the long term.
Fig. 2 shows a myopia prevention method based on a video sensing technology according to a second embodiment of the present invention, comprising the steps of:
S101: a camera shooting identification step, namely shooting a main image in real time and identifying and acquiring a face image in the main image;
S102: recognizing and calculating position coordinates of left and right eyeballs in the face image, and calculating the gazing angles of the left and right eyeballs through the position coordinates;
S103: calculating the imaging distance between the left and right eyeballs and the imaging unit, and respectively calculating the eye distance between the left and right eyeballs and the gaze point according to the imaging distance and the gaze angle;
S104: analyzing whether the eye distance and the stay time of the eye distance exceed corresponding preset prevention thresholds or not, and if so, generating feedback reminding information to remind a user;
S105: receiving the feedback reminding information and, according to it, triggering the generation of multimedia display information and/or function operation signals. The multimedia display information includes, but is not limited to, sound, voice, light, image signals, video signals, and/or exercise programs for eye care and neck-and-shoulder relaxation; the function operation signals include, but are not limited to, screen display, function operation, application programs, computer software, and/or prompt information sent to the electronic equipment of a preset third-party account (such as a teacher's account and/or a student's parent's account).
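A minimal sketch of how step S105 might dispatch the three reminder levels to concrete outputs; the action names are purely illustrative, not from the patent:

```python
# Map each feedback level to illustrative multimedia displays and/or
# function operation signals.
FEEDBACK_ACTIONS = {
    "first":  ["play_voice_alert", "flash_indicator_light"],
    "second": ["show_posture_correction_image"],
    "third":  ["play_eye_exercise_video", "notify_guardian_account"],
}

def dispatch(feedback_level):
    """Return the list of actions to trigger for a reminder, or [] if none."""
    return FEEDBACK_ACTIONS.get(feedback_level, [])
```

Keeping the mapping in data rather than code makes it easy to reconfigure which signals each feedback level triggers.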
Optionally, referring to fig. 3, the step S102 includes:
S1021: analyzing and identifying left and right eyeballs in the face image;
S1022: respectively locating, in the face image, a first group of coordinate points of the iris center points of the left and right eyeballs and a second group of coordinate points of symmetrical eye corners of the left and right eyeballs;
S1023: calculating vectors corresponding to the first group of coordinate points and the second group of coordinate points of the left and right eyeballs, and analyzing the changes of the vectors to calculate the gazing angles of the left and right eyeballs.
Optionally, referring to fig. 4, the step S103 includes:
S1031: according to a camera depth-sensing algorithm, respectively calculating the imaging distance between each of the left and right eyeballs and the imaging unit;
S1032: respectively calculating the left eye distance and the right eye distance between the left and right eyeballs and the fixation point according to the imaging distance and the fixation angle corresponding to the left and right eyeballs.
Referring to fig. 15, using an eye-tracking algorithm disclosed in the prior art, the gaze angle ∠FEB of one of the user's eyes is calculated from the eyeball position coordinates; using a publicly disclosed dual-lens stereoscopic image depth algorithm, the distance z between the eyeball and the camera is calculated; the angle ∠ECF between the eyeball and the camera is calculated from the eyeball's coordinate position in the photo; the vertical height h of the eyeball is then calculated from the distance z and the angle ∠ECF. If the value of ∠FEB is smaller than (90° − ∠ECF), i.e. the user's gaze point likely falls on the book on the desktop, the system calculates the distance t1 between that eyeball and its gaze point on the book, and the value of t1 is that eye's eye distance; similarly, the distance t2 between the user's other eye and its gaze point on the book is calculated, and the value of t2 is that eye's eye distance.
Preferably, the step S104 includes:
a left-right eyeball distance comparison step, namely comparing the eye distances of the left and right eyeballs to calculate an uneven value between them. The system calculates the difference between t1 and t2; the resulting value is the uneven value between the user's left and right eye distances.
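As a minimal illustration, the uneven value can be computed directly from the two per-eye distances t1 and t2 (treating it as the absolute difference is an assumption; the text says only "the difference"):

```python
def uneven_value(t1, t2):
    """Uneven value between the left and right eye distances:
    the (absolute) difference between t1 and t2."""
    return abs(t1 - t2)

# e.g. left eye 0.28 m from the book, right eye 0.34 m:
u = uneven_value(0.28, 0.34)   # about 0.06 m of left-right unevenness
```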
More preferably, the step S104 includes:
and a time recording step of recording the stay time of the eye distance of the left and right eyeballs.
Optionally, referring to fig. 5, the step S104 further includes:
S1041: analyzing and judging whether the eye distance of the left and right eyeballs exceeds a preset distance threshold, and if so, generating first feedback reminding information;
S1042: if the eye distance does not exceed the distance threshold, analyzing and judging whether the uneven value exceeds a preset uneven value threshold, and if so, generating second feedback reminding information;
S1043: if the uneven value does not exceed the uneven value threshold, analyzing and judging whether the stay time exceeds a preset time threshold, and if so, generating third feedback reminding information.
Optionally, in the first embodiment or the second embodiment of the present invention, referring to fig. 6, the step S101 includes:
S1011: capturing a main image reflected in a lens in real time;
S1012: analyzing and identifying at least one face image in the main image;
S1013: matching the face image against pre-stored face images in a pre-stored user database to obtain the corresponding user identity information. Because the system has a face recognition function, it can verify that the acquired eye distance, eye time, and left-right uneven value belong to the same user, improving the reliability of the data.
In a preferred embodiment of the present invention, as shown in fig. 13, the user's eyes are tracked through video using depth sensing and eye tracking technology to calculate the distances between the left and right eyes and the book; meanwhile, according to eye-protection criteria for preventing myopia, it is determined whether the user's eye distance, left-right unevenness, and stay time are appropriate. If the user is detected reading or writing at short distance for a long time, the system gives the user a prompt in real time, thereby helping teenagers and children prevent myopia.
The method described in the first embodiment or the second embodiment of the present invention can be applied in two scenarios: the school classroom and learning at home. Through continuous detection of students' eye distances, eye time, and left-right unevenness, prompts are given to students, teachers, or parents in real time, preventing students from reading and writing at short distance for long periods, achieving the effect of preventing myopia, helping students establish correct long-term reading or writing postures, and protecting visual health. Based on students' eye distance and eye time data, a teacher can adjust lessons appropriately to avoid students reading and writing for too long; parents can supervise at home in time to keep children's eyes from getting too close to their books; and for the students themselves, the system can motivate them through a correct-eye-use competition, quietly cultivating correct eye habits when reading and writing. The method can also judge, by recording and analyzing the gaze angles and durations of a student's eyes in class and while doing homework, whether the student is watching the book or the blackboard, or whether the eyes are wandering and unable to concentrate, thereby obtaining data on the student's concentration during study.
The invention can reveal changes in a student's learning attitude by comparing historical records of eye-concentration data. If concentration improves, the student's interest in learning may have increased; conversely, a decline may indicate that the student is not interested in the learning content. On the one hand, the system can report to a teacher or parent, so that they can offer timely encouragement and guidance; on the other hand, by understanding a student's learning interests, the system can help teachers find teaching methods suited to the student, help parents find ways to promote the child's concentration, and help students find what interests them.
Preferably, the invention is applicable as a measure to prevent teenage students from becoming addicted to games. Staring at a game screen for a long time at short distance causes eye fatigue and leads to myopia; the system can issue a warning, or even send a function operation feedback signal that directly closes the game display on the screen of the electronic equipment.
Alternatively, the invention may be applied to the treatment of patients with hyperactivity or reading and writing disorders. By analyzing the user's reading and writing patterns and concentration data, it helps the therapist find suitable and effective training.
Optionally, the invention can also be applied to interactive exhibition, and the attractiveness of the exhibited article is known by detecting the reading mode and the concentration degree of the exhibited article when the visitor views the exhibited article, and the feedback signal can be sent out to adjust the multimedia information display so as to carry out exhibition interaction with the visitor in real time.
Optionally, in commodity promotion applications, the method can use depth sensing technology to calculate the distance between a consumer's eyes and a commodity, and obtain the consumer's degree of attention to the commodity by detecting how much, and how fast, the viewing distance shrinks as the consumer watches it.
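The shrinking distance and its rate can be sketched as a finite difference over timestamped distance samples; the sampling scheme and units are assumptions for illustration:

```python
def approach_speed(samples):
    """Rate (m/s) at which the eye-to-commodity distance shrinks, from
    (timestamp_s, distance_m) samples; positive means the consumer leans in.
    A simple first-to-last finite difference, as a sketch."""
    (t0, d0), (tn, dn) = samples[0], samples[-1]
    return (d0 - dn) / (tn - t0)

# Consumer moves from 1.0 m to 0.6 m over 2 s, approaching at about 0.2 m/s:
speed = approach_speed([(0.0, 1.0), (1.0, 0.8), (2.0, 0.6)])
```

A higher approach speed, or a larger total shortening, would be read as stronger attention to the commodity.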
Optionally, in the aspect of interactive advertisement application, the invention can send a feedback signal according to the distance of the eye of the consumer watching the advertisement, adjust the multimedia information display, and adjust the interactive advertisement according to the distance of the consumer watching the advertisement.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, using Application Specific Integrated Circuits (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software program of the present application may be executed by a processor to implement the above steps or functions. Likewise, the software programs of the present application (including associated data structures) may be stored on a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
The method according to the invention may be implemented as a computer implemented method on a computer, or in dedicated hardware, or in a combination of both. Executable code or parts thereof for the method according to the invention may be stored on a computer program product. Examples of computer program products include memory devices, optical storage devices, integrated circuits, servers, online software, and the like. Preferably, the computer program product comprises non-transitory program code means stored on a computer readable medium for performing the method according to the invention when said program product is executed on a computer.
In a preferred embodiment the computer program comprises computer program code means adapted to perform all the steps of the method according to the invention when the computer program is run on a computer. Preferably, the computer program is embodied on a computer readable medium.
In summary, the method based on the video sensing technology detects the actual distances between the user's left and right eyes and the gaze point, such as a point on a book on the desktop, and thereby accurately judges whether the user's eyes are too close to the book while reading or writing. The system acquires depth-sensing data automatically, without any prior checking by the user, and automatically judges whether the user's eye distance exceeds the preset myopia-prevention threshold according to eye-protection criteria for preventing myopia. If it is exceeded, the system prompts the user in real time. Another advantage of the present technology is that even if the user moves the book during use and changes its position, the system can still update in real time the actual eye distance between the user's eyes and the focused point on the book.
Of course, the present invention is capable of other various embodiments and its several details are capable of modification and variation in light of the present invention, as will be apparent to those skilled in the art, without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A myopia prevention method based on a video induction technology, comprising the steps of:
a camera shooting identification step, namely shooting a main image in real time and identifying and acquiring a face image in the main image;
an eyeball tracking step of identifying and calculating position coordinates of left and right eyeballs in the face image, and calculating the gazing angles of the left and right eyeballs through the position coordinates;
an eye distance step of calculating the imaging distance between the left and right eyeballs and the imaging unit, and calculating the eye distance between the left and right eyeballs and the gaze point according to the imaging distance and the gaze angle;
analyzing and feeding back, namely analyzing whether the eye distance and the stay time of the eye distance exceed corresponding preset prevention thresholds or not, and if yes, generating feedback reminding information to remind a user;
the eyeball tracking step includes:
An eyeball identification step of analyzing and identifying left and right eyeballs in the face image;
an eyeball coordinate step of respectively positioning a first group of coordinate points of iris center points of the left eyeball and the right eyeball and a second group of coordinate points of symmetrical eye corners of the left eyeball and the right eyeball in the face image;
a gazing angle step of calculating vectors corresponding to the first group of coordinate points and the second group of coordinate points of the left and right eyeballs, and analyzing changes of the vectors to calculate gazing angles of the left and right eyeballs;
the eye distance step comprises the following steps:
a depth induction calculation step of calculating the shooting distance between the left eyeball and the right eyeball and the shooting unit according to a camera depth induction algorithm;
and an eye distance conversion step of calculating a left eye distance and a right eye distance between the left and right eyeballs and a fixation point according to the imaging distance and the fixation angle corresponding to the left and right eyeballs.
2. The myopia prevention method according to claim 1, wherein the analyzing feedback step further comprises:
and a feedback information processing step, namely receiving the feedback reminding information and triggering and generating multimedia display information and/or functional operation signals according to the feedback reminding information.
3. The myopia prevention method according to claim 1, wherein the analyzing and feeding back step comprises:
a left-right eyeball distance comparison step of comparing the eye use distances of the left-right eyeballs to calculate an uneven value between the eye use distances of the left-right eyeballs; and/or
And a time recording step of recording the stay time of the eye distance of the left and right eyeballs.
4. A method of myopia prevention based on video induction technology according to claim 3, wherein the step of analyzing feedback further comprises:
a first analysis feedback step of analyzing and judging whether the eye distance of the left and right eyeballs exceeds a preset distance threshold, and if so, generating first feedback reminding information;
a second analysis feedback step of analyzing and judging whether the uneven value exceeds a preset uneven value threshold or not if the eye distance does not exceed the distance threshold, and generating second feedback reminding information if the eye distance does not exceed the distance threshold;
and a third analysis feedback step, wherein if the uneven value does not exceed the uneven value threshold, whether the residence time exceeds a preset time threshold is analyzed and judged, and if so, third feedback reminding information is generated.
5. The myopia prevention method according to claim 1, wherein the image capturing and recognizing step comprises:
a real-time shooting step of shooting a main image reflected in the lens in real time;
a face image step of analyzing and identifying at least one face image in the main image;
and a face recognition step, namely matching the face image with a pre-stored face image in a pre-stored user database to obtain corresponding user identity information in a matching way.
6. A myopia prevention system based on video induction technology, comprising:
the camera shooting identification unit is used for shooting a main image in real time and identifying and acquiring a face image in the main image;
an eyeball tracking unit, which is used for identifying and calculating position coordinates of left and right eyeballs in the face image, and calculating the gazing angles of the left and right eyeballs through the position coordinates;
an eye distance unit for calculating the imaging distance between the left and right eyeballs and the imaging unit, and calculating the eye distance between the left and right eyeballs and the gaze point according to the imaging distance and the gaze angle;
the analysis feedback unit is used for analyzing whether the eye use distance and the stay time of the eye use distance exceed corresponding preset prevention thresholds or not, and if yes, feedback reminding information is generated to remind a user;
the eyeball tracking unit includes:
the eyeball identification subunit is used for analyzing and identifying left and right eyeballs in the face image;
an eyeball coordinate subunit, configured to respectively locate, in the face image, a first set of coordinate points of iris center points of the left and right eyeballs and a second set of coordinate points of symmetrical corners of the left and right eyeballs;
a gaze angle subunit, configured to calculate vectors corresponding to the first set of coordinate points and the second set of coordinate points of the left and right eyeballs, and analyze changes of the vectors to calculate gaze angles of the left and right eyeballs;
the eye distance unit includes:
a depth induction calculation subunit, configured to calculate imaging distances between the left and right eyeballs and the imaging unit according to a camera depth induction algorithm;
and the eye-using distance changing sub-unit is used for respectively calculating the left eye-using distance and the right eye-using distance between the left eyeball and the right eyeball and the fixation point according to the shooting distance and the fixation angle corresponding to the left eyeball and the right eyeball.
7. The myopia prevention system according to claim 6, further comprising:
and the feedback information processing unit is used for receiving the feedback reminding information and triggering and generating multimedia display information and/or functional operation signals according to the feedback reminding information.
8. The myopia prevention system according to claim 6, wherein the analysis feedback unit comprises:
a left-right eyeball distance comparison subunit for comparing the eye-using distances of the left and right eyeballs to calculate an uneven value between the eye-using distances of the left and right eyeballs; and/or
And the time recording subunit is used for respectively recording the stay time of the eye using distance of the left eyeball and the right eyeball.
9. The myopia prevention system according to claim 8, wherein the analysis feedback unit further comprises:
the first analysis feedback subunit is used for analyzing and judging whether the eye distance between the left eyeball and the right eyeball exceeds a preset distance threshold, and if so, generating first feedback reminding information;
the second analysis feedback subunit is used for analyzing and judging whether the uneven value exceeds a preset uneven value threshold or not if the eye distance does not exceed the distance threshold, and generating second feedback reminding information if the eye distance does not exceed the distance threshold;
and the third analysis feedback subunit is used for analyzing and judging whether the stay time exceeds a preset time threshold if the uneven value does not exceed the uneven value threshold, and generating third feedback reminding information if the stay time exceeds the preset time threshold.
10. The myopia prevention system according to claim 6, wherein the camera recognition unit comprises:
the camera shooting unit is used for shooting a main image reflected in the lens in real time;
a face image subunit, configured to parse and identify at least one face image in the main image;
and the face recognition subunit is used for matching the face image with a pre-stored face image in a pre-stored user database so as to obtain corresponding user identity information in a matching way.
CN202010174241.3A 2020-03-12 2020-03-12 Myopia prevention method and system based on video induction technology Active CN111444789B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010174241.3A CN111444789B (en) 2020-03-12 2020-03-12 Myopia prevention method and system based on video induction technology

Publications (2)

Publication Number Publication Date
CN111444789A CN111444789A (en) 2020-07-24
CN111444789B true CN111444789B (en) 2023-06-20

Family

ID=71654046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010174241.3A Active CN111444789B (en) 2020-03-12 2020-03-12 Myopia prevention method and system based on video induction technology

Country Status (1)

Country Link
CN (1) CN111444789B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114002697A (en) * 2020-07-28 2022-02-01 上海市眼病防治中心 Eye distance real-time tracking detection method and device
CN112183443A (en) * 2020-10-14 2021-01-05 歌尔科技有限公司 Eyesight protection method and device and intelligent glasses
CN113591658B (en) * 2021-07-23 2023-09-05 深圳全息信息科技发展有限公司 Eye protection system based on distance sensing
CN115116088A (en) * 2022-05-27 2022-09-27 中国科学院半导体研究所 Myopia prediction method, apparatus, storage medium, and program product
CN115035588A (en) * 2022-05-31 2022-09-09 中国科学院半导体研究所 Eyesight protection prompting method, device, storage medium and program product

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105138119A (en) * 2015-08-04 2015-12-09 湖南七迪视觉科技有限公司 Stereo vision system with automatic focusing controlled based on human biometrics
CN105205438A (en) * 2014-09-05 2015-12-30 北京七鑫易维信息技术有限公司 Method and system for using infrared eyeball tracking to control the distance between the eyes and the screen
CN105303091A (en) * 2015-10-23 2016-02-03 广东小天才科技有限公司 Eyeball tracking technology based privacy protection method and system
CN106781324A (en) * 2017-01-09 2017-05-31 海南易成长科技有限公司 Eye-protection and spine-protection reminder system and lamp fixture
WO2017152649A1 (en) * 2016-03-08 2017-09-14 珠海全志科技股份有限公司 Method and system for automatically prompting distance from human eyes to screen
CN107273814A (en) * 2017-05-24 2017-10-20 中广热点云科技有限公司 Screen display regulation method and regulation system
CN110213568A (en) * 2019-04-16 2019-09-06 浙江大学 Eye protection system based on augmented reality and data analysis
WO2020020022A1 (en) * 2018-07-25 2020-01-30 卢帆 Method for visual recognition and system thereof
WO2020042542A1 (en) * 2018-08-31 2020-03-05 深圳市沃特沃德股份有限公司 Method and apparatus for acquiring eye movement control calibration data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107340849A (en) * 2016-04-29 2017-11-10 和鑫光电股份有限公司 Mobile device and eye-protection control method thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yan Desai; Zeng Cheng. Method for improving eyeball-control precision based on digital image processing. Journal of Computer Applications, 2018, (10), full text. *
Su Haiming; Hou Zhenjie; Liang Jiuzhen; Xu Yan; Li Xing. Gaze tracking method using geometric features of the human eye. Journal of Image and Graphics, 2019, (06), full text. *


Similar Documents

Publication Publication Date Title
CN111444789B (en) Myopia prevention method and system based on video induction technology
CN107929007B (en) Attention and visual ability training system and method using eye tracking and intelligent evaluation technology
WO2021138964A1 (en) Read/write distance identification method based on smart watch
US8243132B2 (en) Image output apparatus, image output method and image output computer readable medium
US7506979B2 (en) Image recording apparatus, image recording method and image recording program
US20220167877A1 (en) Posture Analysis Systems and Methods
US8150118B2 (en) Image recording apparatus, image recording method and image recording program stored on a computer readable medium
US9498123B2 (en) Image recording apparatus, image recording method and image recording program stored on a computer readable medium
CN106911962B (en) Scene-based mobile video intelligent playing interaction control method
CN109685007B (en) Eye habit early warning method, user equipment, storage medium and device
KR20100016696A (en) Student learning attitude analysis systems in virtual lecture
CN111990805A (en) Desktop device and method for protecting eyesight of students during learning
US20170156585A1 (en) Eye condition determination system
CN106781324A (en) Eye-protection and spine-protection reminder system and lamp fixture
CN115599219B (en) Eye protection control method, system and equipment for display screen and storage medium
US20110279665A1 (en) Image recording apparatus, image recording method and image recording program
CN110251070A (en) It is a kind of to use eye health condition monitoring method and system
CN110148092A (en) Analysis method of teenagers' sitting posture based on machine vision and emotional state
Chukoskie et al. Quantifying gaze behavior during real-world interactions using automated object, face, and fixation detection
CN114120357B (en) Neural network-based myopia prevention method and device
KR102245319B1 (en) System for analysis a concentration of learner
CN104867361A (en) Intelligent terminal for interactional and situational teaching
CN113903161A (en) Sitting posture reminding device and method
CN111582003A (en) Sight tracking student classroom myopia prevention system
Guo et al. Using face and object detection to quantify looks during social interactions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant