CN109685007B - Eye habit early warning method, user equipment, storage medium and device - Google Patents


Info

Publication number
CN109685007B
CN109685007B (application CN201811588201.2A)
Authority
CN
China
Prior art keywords
preset
user
eye
image
sitting posture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811588201.2A
Other languages
Chinese (zh)
Other versions
CN109685007A (en
Inventor
吴镝锋
张佩华
杨强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Kangkang Network Technology Co ltd
Original Assignee
Shenzhen Kangkang Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Kangkang Network Technology Co ltd filed Critical Shenzhen Kangkang Network Technology Co ltd
Priority to CN201811588201.2A priority Critical patent/CN109685007B/en
Publication of CN109685007A publication Critical patent/CN109685007A/en
Application granted granted Critical
Publication of CN109685007B publication Critical patent/CN109685007B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V40/166 — Recognition of biometric, human-related patterns; human faces: detection, localisation, normalisation using acquisition arrangements
    • G06F18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures
    • G06N3/045 — Computing arrangements based on biological models; neural networks; combinations of networks
    • G06N3/08 — Neural networks; learning methods
    • G06V40/103 — Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06V40/18 — Eye characteristics, e.g. of the iris
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses an eye habit early warning method, user equipment, a storage medium and a device. According to the invention, images of a preset area are captured at a preset period to obtain an image to be analyzed; user sitting posture information is determined for each preset user according to the image to be analyzed, the preset users being within the spatial range of the preset area; the user sitting posture information is matched against a preset sitting posture; and when the user sitting posture information does not match the preset sitting posture, first eye warning information is generated to warn of bad eye habits. Because several preset users can appear simultaneously in the captured preset area and their sitting postures can be matched simultaneously, monitoring and early warning can be carried out on several people at once, solving the technical problem that existing eye habit monitoring and early warning approaches cannot serve multiple people at the same time.

Description

Eye habit early warning method, user equipment, storage medium and device
Technical Field
The present invention relates to the field of monitoring and early warning technologies, and in particular, to an eye habit early warning method, a user device, a storage medium, and a device.
Background
The population suffering from ametropia accounts for a high proportion of the general population of China, and ametropia largely manifests clinically as myopia. For children and teenagers, incorrect sitting posture and prolonged eye use are the main causes of myopia.
Therefore, if the eye habits of children and teenagers are monitored in real time, warned about promptly and effectively corrected, the prevalence of myopia can be reduced markedly.
However, most eye habit monitoring and early warning approaches serve only a single student: for example, a student may wear glasses that monitor eye habits, or a device installed on the student's desk may do the monitoring, but neither approach can serve several students at the same time.
The eye habit monitoring and early warning approach therefore has the technical problem that monitoring and early warning cannot be carried out for multiple people at the same time.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The main purpose of the present invention is to provide an eye habit early warning method, user equipment, storage medium and device, aiming to solve the technical problem that existing eye habit monitoring and early warning approaches cannot monitor and warn several people at the same time.
In order to achieve the above purpose, the present invention provides an eye habit pre-warning method, which includes the following steps:
shooting an image of a preset area according to a preset period to obtain an image to be analyzed;
determining user sitting posture information corresponding to each preset user according to the image to be analyzed, wherein the preset users are in the space range of the preset area;
matching the sitting posture information of the user with a preset sitting posture;
and when the sitting posture information of the user is not matched with the preset sitting posture, generating first eye early warning information so as to early warn bad eye habit through the first eye early warning information.
Preferably, after the image capturing is performed on the preset area according to the preset period to obtain the image to be analyzed, the early warning method for eye habit further includes:
and reading a preset user group from a first preset user information database, and determining each preset user in the preset user group.
Preferably, after determining the sitting posture information of the user corresponding to each preset user according to the image to be analyzed, the eye habit early warning method further includes:
determining the corresponding continuous eye use duration according to the sitting posture change moments in the user sitting posture information;
comparing the continuous eye duration with a preset eye duration range;
and when the continuous eye use duration is not in the preset eye use duration range, generating second eye use early warning information so as to perform early warning of bad eye use habit through the second eye use early warning information.
Preferably, after the second eye warning information is generated when the continuous eye use duration is not within the preset eye use duration range, so as to warn of bad eye habits through the second eye warning information, the eye habit early warning method further includes:
reading user personal information of a preset user from a second preset user information database;
generating an eye statistics report according to the personal information of the user, the sitting posture information of the user, a matching result of the sitting posture information of the user and the preset sitting posture, the continuous eye duration and a comparison result of the continuous eye duration and the preset eye duration range.
Preferably, the determining, according to the image to be analyzed, user sitting posture information corresponding to each preset user respectively includes:
Reading a preset face image of a preset user from a third preset user information database;
identifying a preset user corresponding to the preset face image in the image to be analyzed;
dividing the image to be analyzed according to the preset face image to obtain a gesture image to be analyzed corresponding to the preset user;
inputting the gesture image to be analyzed into a preset training convolutional neural network to obtain user sitting posture information which is output by the preset training convolutional neural network and corresponds to the preset user.
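The per-user analysis pipeline described above (locate the face, cut out a posture region, classify it) can be sketched in Python as follows. The crop heuristic and the helper names are assumptions for illustration; a real system would use a face-recognition library and the preset trained convolutional neural network rather than these stand-ins.

```python
def posture_crop(face_box, frame_w, frame_h, scale=3):
    """Expand a face bounding box (x, y, w, h) into a posture region:
    widen by one face-width on each side and extend downward over the
    torso, clamping to the frame bounds. 'scale' is an assumed heuristic."""
    x, y, w, h = face_box
    nx = max(0, x - w)                 # widen left by one face-width
    nw = min(frame_w - nx, 3 * w)      # total width: three face-widths
    nh = min(frame_h - y, scale * h)   # extend downward over the torso
    return (nx, y, nw, nh)

def classify_posture(crop_pixels, cnn=None):
    """Stand-in for the trained CNN: return a sitting-posture label for
    the cropped region. A real system would run the preset trained
    convolutional neural network here."""
    if cnn is not None:
        return cnn(crop_pixels)
    return "unknown"
```

For a face at (100, 50) of size 40×40 in a 640×480 frame, `posture_crop` yields a 120×120 region starting at (60, 50), covering the head and upper body.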
Preferably, before the image capturing is performed on the preset area according to the preset period to obtain the image to be analyzed, the early warning method for eye habit further includes:
shooting an image of a preset area to obtain a sample image;
identifying a preset face image of a preset user in the sample image;
dividing the sample image according to a preset face image of the preset user to obtain a sample posture image corresponding to the preset user;
training the preset initial convolutional neural network through the sample gesture image to obtain a trained preset initial convolutional neural network, and recognizing the trained preset initial convolutional neural network as a preset training convolutional neural network.
Preferably, after the sample image is segmented according to the preset face image of the preset user to obtain a sample pose image corresponding to the preset user, the eye habit early warning method further includes:
acquiring label information;
marking the sample gesture image as a standard gesture image in a preset initial convolutional neural network through the label information;
training the preset initial convolutional neural network through the sample gesture image to obtain a trained preset initial convolutional neural network, and recognizing the trained preset initial convolutional neural network as a preset training convolutional neural network, wherein the training comprises the following steps:
training the preset initial convolutional neural network through the standard gesture image to obtain a trained preset initial convolutional neural network, and recognizing the trained preset initial convolutional neural network as a preset training convolutional neural network.
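The labelling step above — pairing each sample posture image with its label information and singling out the standard posture images used for training — can be sketched as follows. The data shapes and the label value `"standard"` are illustrative assumptions, not terms fixed by the patent.

```python
def build_training_set(sample_crops, label_info):
    """Pair each sample posture image with its label; samples whose
    label marks them as standard become the standard posture images
    used to train the preset initial convolutional neural network."""
    dataset = [(crop, label_info.get(uid, "nonstandard"))
               for uid, crop in sample_crops.items()]
    standards = [crop for crop, lab in dataset if lab == "standard"]
    return dataset, standards
```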
In addition, in order to achieve the above objective, the present invention also proposes a user equipment, which includes a camera, a memory, a processor, and an eye habit pre-warning program stored on the memory and capable of running on the processor, where the eye habit pre-warning program is configured to implement the steps of the eye habit pre-warning method as described above.
In addition, in order to achieve the above object, the present invention also proposes a storage medium having stored thereon an eye habit pre-warning program, which when executed by a processor, implements the steps of the eye habit pre-warning method as described above.
In addition, in order to achieve the above purpose, the present invention also provides an eye habit pre-warning device, where the eye habit pre-warning device includes:
the image shooting module is used for shooting images of a preset area according to a preset period to obtain an image to be analyzed;
the sitting posture determining module is used for determining user sitting posture information corresponding to each preset user according to the image to be analyzed, and the preset users are in the space range of the preset area;
the sitting posture matching module is used for matching the sitting posture information of the user with a preset sitting posture;
and the eye use early warning module is used for generating first eye early warning information when the sitting posture information of the user is not matched with the preset sitting posture so as to early warn bad eye use habit through the first eye early warning information.
The method comprises the steps of firstly shooting an image of a preset area to obtain an image to be analyzed, determining user sitting posture information corresponding to each preset user according to the image to be analyzed, matching the user sitting posture information with the preset sitting posture, and generating first eye early warning information when the user sitting posture information is not matched with the preset sitting posture so as to early warn bad eye using habit through the first eye early warning information. Obviously, a plurality of preset users can exist in the preset area shot by the invention, and the invention can match sitting postures at the same time so as to give out early warning information, so that monitoring and early warning can be carried out on a plurality of people at the same time, and the technical problem that the monitoring and early warning mode for eye habit can not be carried out on a plurality of people at the same time is solved.
Drawings
FIG. 1 is a schematic diagram of a user equipment architecture of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flow chart of a first embodiment of an eye habit pre-warning method according to the present invention;
FIG. 3 is a flowchart of a second embodiment of an eye habit pre-warning method according to the present invention;
FIG. 4 is a flowchart of a third embodiment of an eye habit pre-warning method according to the present invention;
fig. 5 is a block diagram of a first embodiment of an eye habit warning device according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic diagram of a user equipment structure of a hardware running environment according to an embodiment of the present invention.
As shown in fig. 1, the user equipment may include: a processor 1001, such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable communication between these components. The user interface 1003 may include a Display, and optionally a standard wired interface and a wireless interface; in the present invention the wired interface of the user interface 1003 may be a USB interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the structure shown in fig. 1 is not limiting and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and an eye-habit early warning program may be included in a memory 1005 as a computer storage medium.
In the user equipment shown in fig. 1, the network interface 1004 is mainly used to connect to a background server and perform data communication with it; the user interface 1003 is mainly used for connecting peripherals; the user equipment invokes the eye habit pre-warning program stored in the memory 1005 through the processor 1001, and performs the following operations:
shooting an image of a preset area according to a preset period to obtain an image to be analyzed;
determining user sitting posture information corresponding to each preset user according to the image to be analyzed, wherein the preset users are in the space range of the preset area;
matching the sitting posture information of the user with a preset sitting posture;
and when the sitting posture information of the user is not matched with the preset sitting posture, generating first eye early warning information so as to early warn bad eye habit through the first eye early warning information.
Further, the processor 1001 may call the eye habit pre-warning program stored in the memory 1005, and further perform the following operations:
and reading a preset user group from a first preset user information database, and determining each preset user in the preset user group.
Further, the processor 1001 may call the eye habit pre-warning program stored in the memory 1005, and further perform the following operations:
determining corresponding continuous eye use time according to the sitting posture changing moment in the sitting posture information of the user;
comparing the continuous eye duration with a preset eye duration range;
and when the continuous eye use duration is not in the preset eye use duration range, generating second eye use early warning information so as to perform early warning of bad eye use habit through the second eye use early warning information.
Further, the processor 1001 may call the eye habit pre-warning program stored in the memory 1005, and further perform the following operations:
reading user personal information of a preset user from a second preset user information database;
generating an eye statistics report according to the personal information of the user, the sitting posture information of the user, a matching result of the sitting posture information of the user and the preset sitting posture, the continuous eye duration and a comparison result of the continuous eye duration and the preset eye duration range.
Further, the processor 1001 may call the eye habit pre-warning program stored in the memory 1005, and further perform the following operations:
reading a preset face image of a preset user from a third preset user information database;
identifying a preset user corresponding to the preset face image in the image to be analyzed;
dividing the image to be analyzed according to the preset face image to obtain an attitude image to be analyzed corresponding to the preset user;
inputting the gesture image to be analyzed into a preset training convolutional neural network to obtain user sitting posture information which is output by the preset training convolutional neural network and corresponds to the preset user.
Further, the processor 1001 may call the eye habit pre-warning program stored in the memory 1005, and further perform the following operations:
shooting an image of a preset area to obtain a sample image;
identifying a preset face image of a preset user in the sample image;
dividing the sample image according to a preset face image of the preset user to obtain a sample posture image corresponding to the preset user;
training the preset initial convolutional neural network through the sample gesture image to obtain a trained preset initial convolutional neural network, and recognizing the trained preset initial convolutional neural network as a preset training convolutional neural network.
Further, the processor 1001 may call the eye habit pre-warning program stored in the memory 1005, and further perform the following operations:
acquiring label information;
marking the sample gesture image as a standard gesture image in a preset initial convolutional neural network through the label information;
accordingly, the following operations are also performed:
training the preset initial convolutional neural network through the standard gesture image to obtain a trained preset initial convolutional neural network, and recognizing the trained preset initial convolutional neural network as a preset training convolutional neural network.
In this embodiment, an image of the preset area is captured to obtain an image to be analyzed, user sitting posture information is determined for each preset user according to the image to be analyzed, the user sitting posture information is matched against the preset sitting posture, and when they do not match, first eye warning information is generated to warn of bad eye habits. Since several preset users can be present simultaneously in the captured preset area and their sitting postures can be matched simultaneously, monitoring and early warning can be carried out for several people at once, solving the technical problem that existing eye habit monitoring and early warning approaches cannot serve multiple people at the same time.
Based on the hardware structure, the embodiment of the eye habit early warning method is provided.
Referring to fig. 2, fig. 2 is a flow chart of a first embodiment of the eye habit early warning method according to the present invention.
In a first embodiment, the eye habit pre-warning method includes the following steps:
step S10: and shooting the image of the preset area according to a preset period to obtain an image to be analyzed.
It can be understood that a user may use a wearable device, or a monitoring device installed on the desk, to monitor and warn about eye habits; however, all such approaches can only cover one person at a time and cannot serve multiple people simultaneously. Extending them to several people at once is cumbersome to implement and costly.
It should be understood that when the image to be analyzed is shot, the embodiment can shoot multiple people at the same time and monitor the eye habit of the multiple people at the same time, so that the effect of monitoring and early warning for the multiple people at the same time is achieved. Moreover, the user does not need to wear any equipment, and the implementation is simpler.
In a specific implementation, the execution body of the embodiment is a user device, where the user device includes a camera, and the camera may be a high-definition camera. If the user equipment is applied to a scene of a classroom, the camera can be arranged near a blackboard in the classroom, so that the camera can face the front of students and can shoot the faces and sitting postures of the students.
It should be understood that when the applicable scene is a classroom, the preset area is the classroom environment facing the camera, and the preset period may be, say, ten seconds; the camera then photographs the classroom every ten seconds, obtaining a picture of the students in the classroom, i.e. the image to be analyzed.
Of course, this embodiment does not limit the captured material to a picture rather than a video; if a video is obtained, individual frames can be extracted from it as pictures.
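When the input is a video rather than periodic photographs, sampling one frame per preset period reduces to simple arithmetic over the frame rate. The sketch below assumes the capture API itself (e.g. a camera SDK) is out of scope; only the index computation is shown.

```python
def sample_frame_indices(fps, period_s, duration_s):
    """Indices of the frames to extract when analysing a video once
    every 'period_s' seconds (e.g. every ten seconds, as in the
    classroom example) over 'duration_s' seconds of footage."""
    step = int(round(fps * period_s))   # frames between samples
    total = int(fps * duration_s)       # total frames in the clip
    return list(range(0, total, step))
```

At 25 fps with a ten-second period, one minute of video yields six frames to analyse.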
Step S20: and determining user sitting posture information corresponding to each preset user respectively according to the image to be analyzed, wherein the preset users are in the space range of the preset area.
It can be understood that, since several students appear in the photograph of the classroom, the sitting posture of each student can be recognized and recorded as that student's user sitting posture information.
Step S30: and matching the sitting posture information of the user with a preset sitting posture.
It should be appreciated that, to make it easier to identify a bad sitting posture (which in turn causes bad eye habits), a preset sitting posture regarded as good may be configured in advance, following common good-posture requirements. For example, the preset sitting posture may require that the user's upper body is upright and that the user's eyes are one foot (one chi, about 33 cm) from the desktop.
The obtained user sitting posture information can then be matched against the preset sitting posture: if the information records that the user's upper body is upright and the eyes are at the required distance from the desktop, the match is considered successful, the sitting posture is good, and no warning is needed; if the posture differs from the preset sitting posture, it is judged not to match.
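The matching rule just described can be sketched as a small predicate. The field names and the distance tolerance are assumptions for illustration; the patent only requires an upright upper body and roughly one chi (about 33 cm) between eyes and desktop.

```python
def matches_preset(posture, preset, eye_distance_tol_cm=5.0):
    """Return True when observed sitting-posture information matches
    the preset good posture: same uprightness, and eye-to-desk distance
    within an assumed tolerance of the preset distance."""
    upright_ok = posture["upper_body_upright"] == preset["upper_body_upright"]
    dist_ok = abs(posture["eye_desk_cm"] - preset["eye_desk_cm"]) <= eye_distance_tol_cm
    return upright_ok and dist_ok
```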
Step S40: and when the sitting posture information of the user is not matched with the preset sitting posture, generating first eye early warning information so as to early warn bad eye habit through the first eye early warning information.
It can be appreciated that, when there is no match, first eye warning information can be generated, recording a prompt for the students whose sitting posture is incorrect. The first eye warning information can be displayed on a preset web page for students or teachers to read, can provide decision-support data for schools and related departments, and warns students with bad eye habits.
In this embodiment, an image of the preset area is captured to obtain an image to be analyzed, user sitting posture information is determined for each preset user according to the image to be analyzed, the user sitting posture information is matched against the preset sitting posture, and when they do not match, first eye warning information is generated to warn of bad eye habits. Since several preset users can be present simultaneously in the captured preset area and their sitting postures can be matched simultaneously, monitoring and early warning can be carried out for several people at once, solving the technical problem that existing eye habit monitoring and early warning approaches cannot serve multiple people at the same time.
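One pass of steps S10-S40 — after the image has been analysed into per-user posture information — can be sketched as follows. The record layout is an assumption; the matching rule is injected so any concrete predicate can be plugged in.

```python
def warn_bad_postures(postures_by_user, preset, is_match):
    """Match every preset user's sitting posture against the preset
    posture and collect one first-eye-warning record per mismatch,
    covering all users in the image at once."""
    warnings = []
    for user, posture in postures_by_user.items():
        if not is_match(posture, preset):
            warnings.append({"user": user,
                             "type": "first_eye_warning",
                             "posture": posture})
    return warnings
```

Because the loop runs over every user found in a single captured image, several people are monitored and warned simultaneously, which is the point of the method.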
Referring to fig. 3, fig. 3 is a flow chart of a second embodiment of the eye habit early warning method according to the present invention, and based on the first embodiment shown in fig. 2, the second embodiment of the eye habit early warning method according to the present invention is proposed.
In a second embodiment, after the image capturing is performed on the preset area according to the preset period to obtain the image to be analyzed, the eye habit early warning method further includes:
and reading a preset user group from a first preset user information database, and determining each preset user in the preset user group.
In a specific implementation, if the system is applied in a classroom environment for eye habit early warning, each class's classroom is fitted with one user equipment, so the class in which the user equipment is located is determined first when warning. For example, if the user equipment is installed in class A, the preset user group is the students of class A, and the preset users in the group are the individual students of class A, such as student A1, student A2 and student A3. Similarly, if the user equipment is installed in class B, the preset user group is the students of class B.
Further, after the step S20, the eye habit pre-warning method further includes:
step S301: and determining corresponding continuous eye use time according to the sitting posture changing moment in the sitting posture information of the user.
It can be understood that bad eye habits concern not only the user's sitting posture but also the user's continuous eye use duration; eye use that lasts too long easily leads to visual fatigue and, further, to vision problems such as myopia.
In a specific implementation, since user sitting posture information is obtained every preset period, the user's continuous eye use duration can be determined from the moments at which the sitting posture changes. For example, if the user holds a certain posture unchanged, timing continues; if the sitting posture information changes after 40 minutes of continuous timing, the continuous eye use duration is 40 minutes and the posture change moment is at the 40-minute mark.
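Deriving the continuous eye use durations from the posture change moments is a matter of taking the gaps between consecutive change timestamps, for example:

```python
def continuous_eye_minutes(change_times_min):
    """Given a sorted list of sitting-posture change moments (minutes
    from session start), return the length of each stretch of
    continuous eye use between consecutive changes."""
    return [b - a for a, b in zip(change_times_min, change_times_min[1:])]
```

With change moments at 0, 40 and 65 minutes, the user had two continuous stretches of 40 and 25 minutes.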
Step S302: and comparing the continuous eye duration with a preset eye duration range.
It should be understood that the preset eye use duration range regarded as a good eye habit may be set to 0 ≤ x ≤ 30 minutes, where x is the continuous eye use duration to be compared against the range.
Step S303: and when the continuous eye use duration is not in the preset eye use duration range, generating second eye use early warning information so as to perform early warning of bad eye use habit through the second eye use early warning information.
It can be understood that, obviously, 40 minutes does not fall within this range, so the second eye use early warning information can be generated to prompt the reader at the web page end that the continuous eye use duration of the preset user to whom the user sitting posture information belongs is too long, and to suggest an adjustment of the eye use habit.
Of course, when the continuous eye duration is within the preset eye duration range, the process may be ended.
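The logic of steps S301 to S303 can be sketched as follows. This is a minimal illustration, not the patented implementation: the sample structure `PostureSample` and the helper names `continuous_eye_use` and `needs_second_warning` are assumptions, and it uses the document's own numbers (a posture change at the 40-minute mark against the 0–30 minute preset range).

```python
from dataclasses import dataclass

@dataclass
class PostureSample:
    minute: int      # capture time in minutes since continuous timing began
    posture: str     # classified sitting posture label for that capture

# Preset eye use duration range regarded as a good eye habit: 0 <= x <= 30.
MAX_CONTINUOUS_MINUTES = 30

def continuous_eye_use(samples):
    """Return minutes of unchanged posture up to the first posture change."""
    if not samples:
        return 0
    first = samples[0].posture
    for s in samples:
        if s.posture != first:
            return s.minute - samples[0].minute   # sitting posture change moment
    return samples[-1].minute - samples[0].minute

def needs_second_warning(duration):
    """Step S303: warn when the duration falls outside the preset range."""
    return not (0 <= duration <= MAX_CONTINUOUS_MINUTES)

# Document's example: posture unchanged for 40 minutes, then it changes.
samples = [PostureSample(m, "upright") for m in range(0, 40, 10)]
samples.append(PostureSample(40, "slouched"))
d = continuous_eye_use(samples)
print(d, needs_second_warning(d))   # 40 True -> second warning is generated
```

When the posture never changes within the window, the sketch falls back to the span of the samples, which matches the "continuous timing" description above.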
Further, after the step S303, the eye habit pre-warning method further includes:
step S50: and reading the user personal information of the preset user from the second preset user information database.
It will be appreciated that for ease of use by schools and related departments, eye statistics reports may be generated for use. The eye statistics report records the eye use condition of each student and the overall eye use condition assessment.
In a specific implementation, the user personal information of each student may be read from the second preset user information database, where the user personal information includes name, age, class information, and the like, and the second preset user information database may be established in units of classes.
It should be understood that step S50 is performed after said step S303, and also after step S40.
Step S60: generating an eye statistics report according to the personal information of the user, the sitting posture information of the user, a matching result of the sitting posture information of the user and the preset sitting posture, the continuous eye duration and a comparison result of the continuous eye duration and the preset eye duration range.
It can be understood that the eye statistics report includes user personal information, user sitting position information and continuous eye use time length of students in the class, and meanwhile, matching results of the students in the class about sitting positions and comparison results of the students in the class about eye use time length can be recorded.
In a specific implementation, the eye statistics report may further record the specific eye use situation of a single student. For example, the first target matching number of successful matching results within a first preset time period may be counted, a first user sitting posture accuracy rate is generated according to the first target matching number, and the first user sitting posture accuracy rate is added to the eye statistics report. Specifically, a certain preset user can be selected and the number of times the sitting posture of that preset user is deemed correct in one day is counted; if that number is 400 and the total number of matches performed is 1000, the generated first user sitting posture accuracy rate is 0.4, and 0.4 describes the sitting posture condition of the user.
It should be understood that the eye statistics report may also record the overall eye use condition of the students in the class. For example, the second target matching number of successful matching results for the preset user group may be counted within a preset period, a second user sitting posture accuracy rate is generated according to the second target matching number, and the second user sitting posture accuracy rate is added to the eye statistics report. Specifically, a preset user group, for example class A, can be selected and the number of sitting postures of all students in class A deemed correct in one day is counted; if that number is 7000 and the total number of matches performed is 10000, the generated second user sitting posture accuracy rate is 0.7, and 0.7 describes the overall sitting posture condition of class A.
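Both accuracy rates above reduce to the same division of successful matches by total matches; the helper name `sitting_posture_accuracy` is illustrative, and the figures are the two worked examples from the text.

```python
def sitting_posture_accuracy(correct_matches, total_matches):
    """Accuracy rate = successful posture matches / total matches performed."""
    if total_matches == 0:
        return 0.0   # no matches performed yet, so no rate to report
    return correct_matches / total_matches

# Individual example from the text: 400 correct out of 1000 matches in one day.
print(sitting_posture_accuracy(400, 1000))    # 0.4

# Group example: 7000 correct out of 10000 matches across class A.
print(sitting_posture_accuracy(7000, 10000))  # 0.7
```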
In addition, the first preset user information database, the second preset user information database and the third preset user information database mentioned later are merely names; the three user information databases are not necessarily different user information databases and may be the same user information database.
In this embodiment, consideration for the continuous eye duration is also introduced, so that the eye habit of the user is more comprehensively estimated. Meanwhile, the sitting posture information, the continuous eye use time and the evaluation of the two indexes can be recorded into an eye use statistics report so as to provide data support for student eye use habits for schools and related departments.
Referring to fig. 4, fig. 4 is a schematic flow chart of a third embodiment of the eye habit pre-warning method according to the present invention, and based on the first embodiment shown in fig. 2, the third embodiment of the eye habit pre-warning method according to the present invention is proposed.
In a third embodiment, the step S20 includes:
step S201: and reading the preset face image of the preset user from a third preset user information database.
It can be understood that in order to confirm the user sitting posture information of the preset user from the photographed image to be analyzed, a face recognition technology and a deep convolutional neural network algorithm can be introduced at the same time, each student in the image to be analyzed is confirmed through the face recognition technology, and the sitting posture of each student is checked through the deep convolutional neural network algorithm.
In a specific implementation, the face image of each student can be stored in a third preset user information database, so as to provide a comparison sample for face recognition.
Step S202: and identifying a preset user corresponding to the preset face image in the image to be analyzed.
It should be understood that if the image to be analyzed is a classroom picture, the preset face image of each student can be used as a comparison sample, and the identity of the student to whom a face belongs can be identified from the classroom picture. The preset user may be recorded as the student's name, student number or identity card number.
Step S203: and dividing the image to be analyzed according to the preset face image to obtain an attitude image to be analyzed corresponding to the preset user.
It will be appreciated that after determining the identity of each student photographed in the classroom picture, the classroom picture may be divided into small pictures corresponding to each student, such as a posture image capturing the sitting posture of student A1, a posture image capturing the sitting posture of student A2, and a posture image capturing the sitting posture of student A3.
Step S204: inputting the gesture image to be analyzed into a preset training convolutional neural network to obtain user sitting posture information which is output by the preset training convolutional neural network and corresponds to the preset user.
It should be understood that after the gesture image to be analyzed corresponding to each student is segmented, the segmented gesture image to be analyzed may be input into an input layer of a preset training convolutional neural network, so as to finally obtain a recognition result for the sitting posture of the student in the gesture image to be analyzed.
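The flow of steps S201 to S204 can be sketched structurally as follows. The face recognizer and the trained convolutional neural network are replaced here by stand-in functions, since the document fixes neither a model nor a framework; `find_faces`, `crop` and `classify_posture` are hypothetical names, and a nested list stands in for a real image.

```python
def find_faces(image):
    """Stand-in face recognizer: map each enrolled preset user to a
    bounding box (x0, y0, x1, y1) located in the image to be analyzed."""
    return {"A1": (0, 0, 50, 80), "A2": (60, 0, 110, 80)}

def crop(image, box):
    """Stand-in segmentation: cut out the pose image for one user."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in image[y0:y1]]

def classify_posture(patch):
    """Stand-in for the preset trained CNN's forward pass on a pose image."""
    return "upright"

def sitting_postures(image):
    """S201-S204: identify users, segment per-user pose images, classify."""
    results = {}
    for user_id, box in find_faces(image).items():
        patch = crop(image, box)          # posture image to be analyzed
        results[user_id] = classify_posture(patch)
    return results

classroom = [[0] * 120 for _ in range(100)]   # dummy 120x100 "classroom picture"
print(sitting_postures(classroom))
```

The point of the structure is that recognition and classification are decoupled: the face recognizer assigns identities, the segmentation isolates one student per patch, and the network only ever sees single-student pose images.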
Further, before the image capturing is performed on the preset area according to the preset period to obtain the image to be analyzed, the early warning method for eye habit further includes:
shooting an image of a preset area to obtain a sample image;
identifying a preset face image of a preset user in the sample image;
dividing the sample image according to a preset face image of the preset user to obtain a sample posture image corresponding to the preset user;
training the preset initial convolutional neural network through the sample gesture image to obtain a trained preset initial convolutional neural network, and recognizing the trained preset initial convolutional neural network as a preset training convolutional neural network.
It can be understood that the preset training convolutional neural network is a trained convolutional neural network, and can be directly applied to the step of identifying sitting postures. In order to obtain the trained convolutional neural network, the preset initial convolutional neural network can be trained first.
In a specific implementation, the same area can be photographed in advance, and the respective gesture image of each student can be segmented from the sample image photographed in advance. And then training the preset initial convolutional neural network through the obtained gesture image so that the weighting coefficient in the preset initial convolutional neural network is the optimal weighting coefficient. In the subsequent practical application process, the trained convolutional neural network can be directly used for completing early warning of eye habit.
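The train-then-reuse flow described above can be illustrated in miniature. This is only a sketch of the flow: the convolutional neural network is replaced by a single-layer perceptron on two hand-picked "pose features", because the document specifies no architecture, framework or feature set, and all names and numbers below are assumptions.

```python
def train(samples, labels, epochs=50, lr=0.1):
    """Fit the stand-in model's weights on labeled sample pose features
    (the analogue of tuning the network's weighting coefficients)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, x):
    """Reuse the trained model directly at early-warning time."""
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy "pose features": [torso uprightness, eye-to-desk distance]; 1 = correct.
X = [[0.9, 1.0], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
model = train(X, y)                   # training phase, performed once in advance
print(predict(model, [0.85, 0.95]))   # later captures reuse the frozen model
```

The split mirrors the text: the expensive fitting step happens once on pre-collected sample pose images, and the subsequent early-warning process only runs the cheap forward pass.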
Further, after the sample image is segmented according to the preset face image of the preset user to obtain a sample posture image corresponding to the preset user, the eye habit early warning method further includes:
acquiring label information;
marking the sample gesture image as a standard gesture image in a preset initial convolutional neural network through the label information;
training the preset initial convolutional neural network through the sample gesture image to obtain a trained preset initial convolutional neural network, and recognizing the trained preset initial convolutional neural network as a preset training convolutional neural network, wherein the training comprises the following steps:
Training the preset initial convolutional neural network through the standard gesture image to obtain a trained preset initial convolutional neural network, and recognizing the trained preset initial convolutional neural network as a preset training convolutional neural network.
It can be appreciated that in order to improve the accuracy of the convolutional neural network after training in identifying the correct sitting posture, a positive sample can be input when the convolutional neural network is trained, and the positive sample is a sample posture image of a preset user acting according to the correct sitting posture. Meanwhile, the sample gesture image can be marked as a positive sample, namely a standard gesture image, in the convolutional neural network through label information. The convolutional neural network is trained through the standard gesture image, so that the convolutional neural network after training achieves the optimal recognition effect.
It should be understood that, in order to obtain sample posture images meeting the sitting posture requirement, each student can first sit in his or her own seat in the classroom in the correct preset sitting posture, and a picture of this scene is then collected by the camera as a sample posture image meeting the sitting posture requirement.
In this embodiment, the face recognition technology and the convolutional neural network may be combined to complete the operation of sitting gesture recognition, so as to automatically complete the examination of the sitting gesture of the user.
In addition, the embodiment of the invention also provides a storage medium, wherein the storage medium stores an early warning program for eye habit, and the early warning program for eye habit realizes the following operations when being executed by a processor:
shooting an image of a preset area according to a preset period to obtain an image to be analyzed;
determining user sitting posture information corresponding to each preset user according to the image to be analyzed, wherein the preset users are in the space range of the preset area;
matching the sitting posture information of the user with a preset sitting posture;
and when the sitting posture information of the user is not matched with the preset sitting posture, generating first eye early warning information so as to early warn bad eye habit through the first eye early warning information.
Further, the pre-warning program for eye habit is executed by the processor to further realize the following operations:
and reading a preset user group from a first preset user information database, and determining each preset user in the preset user group.
Further, the pre-warning program for eye habit is executed by the processor to further realize the following operations:
determining corresponding continuous eye use time according to the sitting posture changing moment in the sitting posture information of the user;
Comparing the continuous eye duration with a preset eye duration range;
and when the continuous eye use duration is not in the preset eye use duration range, generating second eye use early warning information so as to perform early warning of bad eye use habit through the second eye use early warning information.
Further, the pre-warning program for eye habit is executed by the processor to further realize the following operations:
reading user personal information of a preset user from a second preset user information database;
generating an eye statistics report according to the personal information of the user, the sitting posture information of the user, a matching result of the sitting posture information of the user and the preset sitting posture, the continuous eye duration and a comparison result of the continuous eye duration and the preset eye duration range.
Further, the pre-warning program for eye habit is executed by the processor to further realize the following operations:
reading a preset face image of a preset user from a third preset user information database;
identifying a preset user corresponding to the preset face image in the image to be analyzed;
dividing the image to be analyzed according to the preset face image to obtain an attitude image to be analyzed corresponding to the preset user;
Inputting the gesture image to be analyzed into a preset training convolutional neural network to obtain user sitting posture information which is output by the preset training convolutional neural network and corresponds to the preset user.
Further, the pre-warning program for eye habit is executed by the processor to further realize the following operations:
shooting an image of a preset area to obtain a sample image;
identifying a preset face image of a preset user in the sample image;
dividing the sample image according to a preset face image of the preset user to obtain a sample posture image corresponding to the preset user;
training the preset initial convolutional neural network through the sample gesture image to obtain a trained preset initial convolutional neural network, and recognizing the trained preset initial convolutional neural network as a preset training convolutional neural network.
Further, the pre-warning program for eye habit is executed by the processor to further realize the following operations:
acquiring label information;
marking the sample gesture image as a standard gesture image in a preset initial convolutional neural network through the label information;
accordingly, the following operations are also implemented:
Training the preset initial convolutional neural network through the standard gesture image to obtain a trained preset initial convolutional neural network, and recognizing the trained preset initial convolutional neural network as a preset training convolutional neural network.
In this embodiment, an image is captured in a preset area to obtain an image to be analyzed, user sitting posture information corresponding to each preset user is determined according to the image to be analyzed, the user sitting posture information is matched with the preset sitting posture, and when the user sitting posture information is not matched with the preset sitting posture, first eye early warning information is generated to early warn bad eye habit through the first eye early warning information. Obviously, a plurality of preset users can exist in the preset area shot in the embodiment at the same time, in addition, the embodiment can match sitting postures at the same time so as to give out early warning information, and also can monitor and early warn for a plurality of people at the same time, so that the technical problem that the monitoring and early warning mode for the habit of eyes cannot monitor and early warn for a plurality of people at the same time is solved.
In addition, referring to fig. 5, an embodiment of the present invention further provides an eye habit early warning device, where the eye habit early warning device includes:
The image capturing module 10 is configured to capture an image of a preset area according to a preset period, so as to obtain an image to be analyzed.
It can be understood that a user could use a wearable device, or install a monitoring device on the desktop, to monitor and warn about eye habits; however, all such monitoring and early warning modes can target only one person at a time and cannot target multiple persons simultaneously. Covering multiple people with such devices is troublesome to implement and costly.
It should be understood that when the image to be analyzed is shot, the embodiment can shoot multiple people at the same time and monitor the eye habit of the multiple people at the same time, so that the effect of monitoring and early warning for the multiple people at the same time is achieved. Moreover, the user does not need to wear any equipment, and the implementation is simpler.
It should be understood that when the applicable scene is a classroom, the preset area is the classroom area facing the camera, and the preset period may be ten seconds; the camera then takes a picture of the classroom area every ten seconds, so as to obtain a classroom picture capturing the students, i.e., an image to be analyzed.
Of course, the present embodiment does not limit whether the captured material to be analyzed is a picture or a video. If a video is obtained, individual frames can be extracted from the video to obtain pictures.
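The periodic capture performed by the image capturing module can be sketched as a simple loop over the ten-second preset period mentioned above; `take_photo` stands in for the camera driver call, and the injectable `sleep` is only there so the sketch can run without real waiting.

```python
import time

PRESET_PERIOD_S = 10   # the document's example preset period: ten seconds

def capture_loop(take_photo, periods, sleep=time.sleep):
    """Capture one image to be analyzed per preset period."""
    images = []
    for _ in range(periods):
        images.append(take_photo())   # each result is one image to be analyzed
        sleep(PRESET_PERIOD_S)
    return images

# Demo with a counter in place of the camera and a no-op sleep.
frames = iter(range(100))
shots = capture_loop(lambda: next(frames), 3, sleep=lambda s: None)
print(shots)   # [0, 1, 2]
```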
The sitting posture determining module 20 is configured to determine user sitting posture information corresponding to each preset user according to the image to be analyzed, where the preset user is in a spatial range of the preset area.
It can be understood that, since a plurality of students have been photographed in the classroom picture, the sitting postures of the plurality of students can be recognized and recorded as the user sitting posture information of each student; the user sitting posture information records the sitting posture of the corresponding preset user.
The sitting posture matching module 30 is configured to match the sitting posture information of the user with a preset sitting posture.
It should be appreciated that, in order to facilitate the identification of a bad sitting posture that in turn causes bad eye use habits, a preset sitting posture identified as a good sitting posture may be set in advance, by reference to common good-sitting-posture requirements. For example, the preset sitting posture may be set such that the upper body of the user stands upright and the user's eyes are one foot from the desktop.
It can be understood that the obtained user sitting posture information can be matched with the preset sitting posture. If the user sitting posture information records that the upper body of the user is upright and the user's eyes are exactly one foot from the desktop, the matching can be considered successful: the sitting posture of the user is good and no early warning is needed. If the posture differs from the preset sitting posture, it is determined not to match.
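The matching rule above can be sketched as a simple check of the example preset posture (upright upper body, eyes at least one foot from the desktop); the field names, the centimetre threshold for "one foot", and `first_warning` are all illustrative assumptions, not the patented matching method.

```python
# Example preset sitting posture: upright torso, eyes >= ~30 cm from desktop.
PRESET = {"upright": True, "eye_desk_cm_min": 30}

def matches_preset(posture):
    """True when the recorded sitting posture satisfies the preset posture."""
    return bool(posture.get("upright")) and \
        posture.get("eye_desk_cm", 0) >= PRESET["eye_desk_cm_min"]

def first_warning(user_id, posture):
    """Return first eye use early warning text, or None when matching succeeds."""
    if matches_preset(posture):
        return None
    return f"user {user_id}: sitting posture does not match the preset sitting posture"

print(first_warning("A1", {"upright": True, "eye_desk_cm": 34}))   # None
print(first_warning("A2", {"upright": False, "eye_desk_cm": 20}))
```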
The eye use early warning module 40 is configured to generate first eye use early warning information when the sitting posture information of the user is not matched with the preset sitting posture, so as to perform early warning of bad eye use habit through the first eye use early warning information.
It can be appreciated that when there is no match, first eye warning information can be generated, which can be recorded with prompt information for students with incorrect sitting postures. The first eye early warning information can be displayed on a preset webpage for students or teachers to read, decision data support can be provided for schools and related departments, and early warning prompt is further carried out on students with poor eye habit.
In this embodiment, an image is captured in a preset area to obtain an image to be analyzed, user sitting posture information corresponding to each preset user is determined according to the image to be analyzed, the user sitting posture information is matched with the preset sitting posture, and when the user sitting posture information is not matched with the preset sitting posture, first eye early warning information is generated to early warn bad eye habit through the first eye early warning information. Obviously, a plurality of preset users can exist in the preset area shot in the embodiment at the same time, in addition, the embodiment can match sitting postures at the same time so as to give out early warning information, and also can monitor and early warn for a plurality of people at the same time, so that the technical problem that the monitoring and early warning mode for the habit of eyes cannot monitor and early warn for a plurality of people at the same time is solved.
In an embodiment, the eye habit early warning device further includes:
the group confirmation module is used for reading the preset user group from the first preset user information database and determining all preset users in the preset user group.
In an embodiment, the eye habit early warning device further includes:
the time length early warning module is used for determining corresponding continuous eye use time length according to the sitting posture changing moment in the sitting posture information of the user; comparing the continuous eye duration with a preset eye duration range; and when the continuous eye use duration is not in the preset eye use duration range, generating second eye use early warning information so as to perform early warning of bad eye use habit through the second eye use early warning information.
In an embodiment, the eye habit early warning device further includes:
the report generation module is used for reading the user personal information of the preset user from the second preset user information database; generating an eye statistics report according to the personal information of the user, the sitting posture information of the user, a matching result of the sitting posture information of the user and the preset sitting posture, the continuous eye duration and a comparison result of the continuous eye duration and the preset eye duration range.
In one embodiment, the sitting posture determining module 20 is further configured to read a preset face image of the preset user from a third preset user information database; identifying a preset user corresponding to the preset face image in the image to be analyzed; dividing the image to be analyzed according to the preset face image to obtain an attitude image to be analyzed corresponding to the preset user; inputting the gesture image to be analyzed into a preset training convolutional neural network to obtain user sitting posture information which is output by the preset training convolutional neural network and corresponds to the preset user.
In an embodiment, the eye habit early warning device further includes:
the neural network training module is used for shooting images of a preset area to obtain sample images; identifying a preset face image of a preset user in the sample image; dividing the sample image according to a preset face image of the preset user to obtain a sample posture image corresponding to the preset user; training the preset initial convolutional neural network through the sample gesture image to obtain a trained preset initial convolutional neural network, and recognizing the trained preset initial convolutional neural network as a preset training convolutional neural network.
In an embodiment, the eye habit early warning device further includes:
the standard image module is used for acquiring label information; marking the sample gesture image as a standard gesture image in a preset initial convolutional neural network through the label information;
the neural network training module is further configured to train the preset initial convolutional neural network through the standard pose image, so as to obtain a trained preset initial convolutional neural network, and identify the trained preset initial convolutional neural network as a preset training convolutional neural network.
Other embodiments or specific implementation manners of the eye habit early warning device of the present invention may refer to the above method embodiments, and are not described herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for description and do not represent the advantages or disadvantages of the embodiments. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The terms first, second, third, etc. do not denote any order; they are used merely as names.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structures or equivalent process transformations made using the contents of this description, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of patent protection of the present invention.

Claims (9)

1. An eye habit early warning method, characterized by comprising the following steps:
shooting an image of a preset area according to a preset period to obtain an image to be analyzed;
determining user sitting posture information corresponding to each preset user according to the image to be analyzed, wherein the preset users are in the space range of the preset area;
matching the sitting posture information of the user with a preset sitting posture;
when the sitting posture information of the user is not matched with the preset sitting posture, generating first eye early warning information so as to early warn bad eye habit through the first eye early warning information;
the determining the user sitting posture information corresponding to each preset user according to the image to be analyzed comprises the following steps:
reading a preset face image of a preset user from a third preset user information database;
identifying a preset user corresponding to the preset face image in the image to be analyzed;
dividing the image to be analyzed according to the preset face image to obtain an attitude image to be analyzed corresponding to the preset user;
inputting the gesture image to be analyzed into a preset training convolutional neural network to obtain user sitting posture information which is output by the preset training convolutional neural network and corresponds to the preset user.
2. The eye habit pre-warning method according to claim 1, wherein after capturing images of a preset area according to a preset period to obtain an image to be analyzed, the eye habit pre-warning method further comprises:
and reading a preset user group from a first preset user information database, and determining each preset user in the preset user group.
3. The method for pre-warning eye habit according to claim 1, wherein after determining the sitting posture information of the user corresponding to each preset user according to the image to be analyzed, the method for pre-warning eye habit further comprises:
determining corresponding continuous eye use time according to the sitting posture changing moment in the sitting posture information of the user;
comparing the continuous eye duration with a preset eye duration range;
and when the continuous eye use duration is not in the preset eye use duration range, generating second eye use early warning information so as to perform early warning of bad eye use habit through the second eye use early warning information.
4. The eye habit early warning method according to claim 3, characterized in that, after the second eye use early warning information is generated when the continuous eye use duration is not within the preset eye use duration range, so as to perform early warning of bad eye use habit through the second eye use early warning information, the eye habit early warning method further comprises:
Reading user personal information of a preset user from a second preset user information database;
generating an eye statistics report according to the personal information of the user, the sitting posture information of the user, a matching result of the sitting posture information of the user and the preset sitting posture, the continuous eye duration and a comparison result of the continuous eye duration and the preset eye duration range.
5. The eye habit pre-warning method according to claim 1, wherein before the image capturing is performed on the preset area according to the preset period to obtain the image to be analyzed, the eye habit pre-warning method further comprises:
shooting an image of a preset area to obtain a sample image;
identifying a preset face image of a preset user in the sample image;
dividing the sample image according to a preset face image of the preset user to obtain a sample posture image corresponding to the preset user;
training the preset initial convolutional neural network through the sample gesture image to obtain a trained preset initial convolutional neural network, and recognizing the trained preset initial convolutional neural network as a preset training convolutional neural network.
6. The eye habit early warning method according to claim 5, wherein after the sample image is segmented according to the preset face image of the preset user to obtain the sample posture image corresponding to the preset user, the eye habit early warning method further comprises:
acquiring label information;
marking the sample posture image as a standard posture image of the preset initial convolutional neural network through the label information;
wherein the training of the preset initial convolutional neural network through the sample posture image to obtain the trained preset initial convolutional neural network, and the taking of the trained preset initial convolutional neural network as the preset training convolutional neural network, comprise:
training the preset initial convolutional neural network through the standard posture image to obtain the trained preset initial convolutional neural network, and taking the trained preset initial convolutional neural network as the preset training convolutional neural network.
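The labelling step of claim 6 amounts to attaching label information to sample posture images so that only labelled (standard) images feed the training step. A minimal sketch, with all names assumed for illustration:

```python
def label_samples(posture_images, label_info):
    """Pair each sample posture image with its label information; images
    without a label are excluded, so only labelled images become standard
    posture images used for training."""
    standard = []
    for image, label in zip(posture_images, label_info):
        if label is not None:
            standard.append((label, image))
    return standard
```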
7. A user device, comprising: a camera, a memory, a processor, and an eye habit early warning program stored on the memory and executable on the processor, wherein the eye habit early warning program, when executed by the processor, implements the steps of the eye habit early warning method according to any one of claims 1 to 6.
8. A storage medium, wherein an eye habit early warning program is stored on the storage medium, and the eye habit early warning program, when executed by a processor, implements the steps of the eye habit early warning method according to any one of claims 1 to 6.
9. An eye habit early warning device, characterized in that the eye habit early warning device comprises:
an image capturing module, configured to capture an image of a preset area according to a preset period to obtain an image to be analyzed;
a sitting posture determining module, configured to determine user sitting posture information corresponding to each preset user according to the image to be analyzed, the preset users being within the spatial range of the preset area;
a sitting posture matching module, configured to match the user sitting posture information with a preset sitting posture;
an eye-use early warning module, configured to generate first eye-use early warning information when the user sitting posture information does not match the preset sitting posture, so as to warn of a bad eye-use habit through the first eye-use early warning information;
wherein the sitting posture determining module is further configured to:
read a preset face image of a preset user from a third preset user information database;
identify the preset user corresponding to the preset face image in the image to be analyzed;
segment the image to be analyzed according to the preset face image to obtain a posture image to be analyzed corresponding to the preset user;
input the posture image to be analyzed into the preset training convolutional neural network to obtain the user sitting posture information, output by the preset training convolutional neural network, corresponding to the preset user.
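The module structure of the device claim can be sketched as a small class in which the trained network is stubbed by a `classify` callable, and posture matching plus warning generation are plain methods. The class name, `classify` hook, and `"upright"` default are illustrative assumptions, not the patented design:

```python
class EyeHabitWarningDevice:
    """Minimal sketch of the claimed device: a posture classifier (stand-in
    for the trained CNN), a preset sitting posture to match against, and a
    warning path for mismatches."""

    def __init__(self, classify, preset_posture="upright"):
        self.classify = classify            # stands in for the trained CNN
        self.preset_posture = preset_posture

    def process(self, posture_image):
        posture = self.classify(posture_image)   # sitting-posture determination
        if posture != self.preset_posture:       # sitting-posture matching
            return "first eye-use warning: posture '%s' does not match" % posture
        return None                              # posture matches; no warning
```

Usage: wire in any classifier, then feed it posture images captured on the preset period; a non-`None` return is the first eye-use early warning information.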
CN201811588201.2A 2018-12-21 2018-12-21 Eye habit early warning method, user equipment, storage medium and device Active CN109685007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811588201.2A CN109685007B (en) 2018-12-21 2018-12-21 Eye habit early warning method, user equipment, storage medium and device


Publications (2)

Publication Number Publication Date
CN109685007A CN109685007A (en) 2019-04-26
CN109685007B true CN109685007B (en) 2023-09-05

Family

ID=66189223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811588201.2A Active CN109685007B (en) 2018-12-21 2018-12-21 Eye habit early warning method, user equipment, storage medium and device

Country Status (1)

Country Link
CN (1) CN109685007B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111265220A (en) * 2020-01-21 2020-06-12 王力安防科技股份有限公司 Myopia early warning method, device and equipment
CN113761989B (en) * 2020-06-05 2023-04-07 腾讯科技(深圳)有限公司 Behavior recognition method and device, computer and readable storage medium
CN114664067A (en) * 2022-02-25 2022-06-24 北京百度网讯科技有限公司 Method and device for outputting prompt, electronic equipment and storage medium
CN115547497B (en) * 2022-10-09 2023-09-08 湖南火眼医疗科技有限公司 Myopia prevention and control system and method based on multi-source data

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014053794A (en) * 2012-09-07 2014-03-20 Nintendo Co Ltd Information processing program, information processing apparatus, information processing system, and information processing method
CN105139447B (en) * 2015-08-07 2018-09-11 天津中科智能技术研究院有限公司 Sitting posture real-time detection method based on dual camera
CN106910323A (en) * 2017-03-20 2017-06-30 广东小天才科技有限公司 A kind of sitting posture detecting method and device based on eye-protecting lamp
CN108416988A (en) * 2017-04-19 2018-08-17 陈其高 A kind of sitting posture prompting method and device
CN108601133A (en) * 2018-02-12 2018-09-28 甄十信息科技(上海)有限公司 A kind of intelligent desk lamp and the sitting posture correction function method based on intelligent desk lamp

Also Published As

Publication number Publication date
CN109685007A (en) 2019-04-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant