CN111178273A - Education method and device based on emotion change - Google Patents
- Publication number
- CN111178273A CN111178273A CN201911398665.1A CN201911398665A CN111178273A CN 111178273 A CN111178273 A CN 111178273A CN 201911398665 A CN201911398665 A CN 201911398665A CN 111178273 A CN111178273 A CN 111178273A
- Authority
- CN
- China
- Prior art keywords
- emotion
- education
- preset
- rule
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention discloses an education method and device based on emotion change, comprising the following steps: when a first user starts learning, collecting a first facial image with a camera; determining a first emotion based on the first facial image; executing a first education rule for the first emotion; detecting whether the first emotion changes and outputting a detection result; and adjusting the first education rule according to the detection result. By collecting the first user's first facial image with the camera, determining the first emotion, and then executing the first education rule corresponding to that emotion, the method educates in a way targeted at the first user's emotional changes, so that teaching quality stays high and efficient. It solves the prior-art problem that a teacher cannot effectively capture each learning emotion of a student and therefore cannot give effective guidance in response to a child's emotional feedback during online learning, which results in low teaching quality, students failing to absorb all the knowledge taught, and low efficiency.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an education method and device based on emotion change.
Background
At present, online teaching has become mainstream in modern education. Its main form is a teacher and a student conducting video lessons simultaneously at their respective computer terminals. Compared with traditional teaching, this mode places no strict requirement on the teaching location, and one-to-one video teaching allows the teacher and the student to reach a deeper understanding. However, the existing online education system has the following defects: during online education, the knowledge-point exercise stage is mechanical and inefficient, and the teacher cannot effectively capture each learning emotion of the student or give effective guidance in response to the child's emotional feedback during online learning, so teaching quality is low and students cannot fully absorb what they learn.
Disclosure of Invention
To address the problems described above, the present invention determines the student's current emotion from the student's facial image, executes the education rule corresponding to that emotion, determines in real time whether the student's emotion changes during teaching, and decides whether to adjust the education rule according to the determination result, thereby realizing emotion-aware online education.
A method of education based on mood changes comprising the steps of:
when a first user starts learning, a first face image is collected by a camera;
determining a first emotion based on the first facial image;
executing a first educational rule for the first emotion;
detecting whether the first emotion changes or not, and outputting a detection result;
and adjusting the first education rule according to the detection result.
Preferably, before the first facial image is collected with the camera when the first user starts learning, the method further includes:
acquiring a second facial image of n emotions of a second user in advance;
preprocessing the second face image;
dividing the second face image into five regions, the five regions including: a forehead region, an eyebrow region, a nose bridge region, a cheek region, and a chin region;
counting preset parameters of five regions in the second face image, and training by combining the preset parameters with the second face image to obtain preset emotion indexes of n emotions;
and establishing a database to store the n emotions, and preset emotion indexes and preset parameters corresponding to each emotion.
Preferably, the determining a first emotion based on the first facial image includes:
acquiring current parameters of the first facial image;
obtaining a current emotion index based on the current parameter;
inputting the current emotion index into the database to be searched to obtain the first emotion, wherein the first emotion is any one of the n emotions.
Preferably, the method further comprises:
executing a preset education rule before the camera is used for collecting the first face image;
acquiring m preset coefficients of the preset education rule, and adjusting the m preset coefficients according to each emotion in the n emotions in the database;
saving the adjusted preset education rules;
the detecting whether the first emotion changes or not and outputting a detection result includes:
acquiring a third face image of the first user in real time by using the camera;
confirming a second emotion corresponding to the third face image;
comparing whether the first emotion and the second emotion are the same;
if the first emotion is the same as the second emotion, the detection result is that no change occurs;
and if the first emotion and the second emotion are different, the detection result is that the change occurs.
Preferably, the adjusting the first education rule according to the detection result includes:
if the detection result is that no change occurred, continuing to execute the first education rule;
and if the detection result is that a change occurred, adjusting the first education rule to a second education rule corresponding to the second emotion, wherein the second education rule is any one of the adjusted preset education rules other than the first education rule.
An educational apparatus based on mood changes, the apparatus comprising:
the first acquisition module is used for acquiring a first facial image by using a camera when a first user starts learning;
a determination module to determine a first emotion based on the first facial image;
an execution module to execute a first educational rule for the first emotion;
the detection module is used for detecting whether the first emotion changes or not and outputting a detection result;
and the adjusting module is used for adjusting the first education rule according to the detection result.
Preferably, the apparatus further comprises:
the second acquisition module is used for acquiring a second facial image of n emotions of a second user in advance;
the preprocessing module is used for preprocessing the second face image;
a partitioning module to partition the second face image into five regions, the five regions including: a forehead region, an eyebrow region, a nose bridge region, a cheek region, and a chin region;
the training module is used for counting preset parameters of five regions in the second face image, and combining the preset parameters with the second face image for training to obtain preset emotion indexes of n emotions;
and the first storage module is used for establishing a database to store the n emotions, and preset emotion indexes and preset parameters corresponding to each emotion.
Preferably, the determining module includes:
the acquisition submodule is used for acquiring the current parameters of the first facial image;
an obtaining submodule for obtaining a current emotion index based on the current parameter;
and the retrieval submodule is used for inputting the current emotion index into the database for retrieval to obtain the first emotion, wherein the first emotion is any one of the n emotions.
Preferably, the apparatus further comprises:
the second execution module is used for executing preset education rules before the camera is used for acquiring the first face image;
the acquisition module is used for acquiring m preset coefficients of the preset education rule and adjusting the m preset coefficients according to each emotion in the n emotions in the database;
the second storage module is used for storing the adjusted preset education rules;
the detection module comprises:
the acquisition submodule is used for acquiring a third facial image of the first user in real time by using the camera;
the confirming submodule is used for confirming a second emotion corresponding to the third face image;
the comparison submodule is used for comparing whether the first emotion and the second emotion are the same;
and the determining submodule is used for determining that the detection result is unchanged if the comparison result of the comparing submodule is that the first emotion is the same as the second emotion, and determining that the detection result is changed if the comparison result of the comparing submodule is that the first emotion is different from the second emotion.
Preferably, the adjusting module includes:
the execution submodule is used for continuously executing the first education rule if the detection result of the detection module is unchanged;
and the conversion sub-module is used for adjusting the first education rule into a second education rule corresponding to the second emotion if the detection result of the detection module is changed, wherein the second education rule is any one preset education rule except the first education rule in the adjusted preset education rules.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention without limiting it. In the drawings:
fig. 1 is a flowchart of the work of an education method based on emotional changes according to the present invention;
FIG. 2 is another working flowchart of an education method based on emotional changes according to the present invention;
FIG. 3 is a preset education rule adjustment screenshot of an education method based on emotional changes according to the present invention;
fig. 4 is a structural view of an education apparatus based on emotional change according to the present invention;
fig. 5 is another structural view of an education apparatus based on emotional change according to the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
At present, online teaching has become mainstream in modern education. Its main form is a teacher and a student conducting video lessons simultaneously at their respective computer terminals. Compared with traditional teaching, this mode places no strict requirement on the teaching location, and one-to-one video teaching allows the teacher and the student to reach a deeper understanding. However, the existing online education system has the following defects: during online education, the knowledge-point exercise stage is mechanical, and the teacher cannot effectively capture each learning emotion of the student or give effective guidance in response to the child's emotional feedback during online learning, so teaching quality is low, students cannot fully absorb what they learn, and efficiency is low. To solve the above problems, the present embodiment discloses a method that determines the current emotion from collected facial images of the student, executes the education rule corresponding to that emotion, determines in real time whether the student's emotion changes during teaching, and decides whether to adjust the education rule according to the determination result.
An educational method based on emotional changes, as shown in fig. 1, comprising the steps of:
step S101, when a first user starts learning, a first face image is collected by a camera;
step S102, determining a first emotion based on the first face image;
step S103, executing a first education rule aiming at the first emotion;
s104, detecting whether the first emotion changes or not, and outputting a detection result;
and S105, adjusting the first education rule according to the detection result.
The working principle of the technical scheme is as follows: when a user starts learning, a first facial image is collected with the camera and a first emotion is determined from it. The first education rule corresponding to the determined first emotion is executed, and during execution the system detects in real time whether the user's first emotion changes. If it does not change, the first education rule continues to be executed; if it does change, the first education rule is adjusted according to the changed emotion.
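The working principle above can be sketched in Python. The rule set, emotion labels, and the `classify_emotion` helper are hypothetical placeholders introduced for illustration; the patent does not specify their content:

```python
# Hypothetical education rules keyed by emotion label; short strings
# stand in for real teaching strategies, which the patent leaves open.
EDUCATION_RULES = {
    "neutral": "continue current pace",
    "happy": "increase difficulty",
    "confused": "insert guided hints",
}

def teach_session(frames, classify_emotion):
    """Steps S101-S105: classify the first frame, execute the matching
    rule, then re-classify each later frame and switch rules on change."""
    current_emotion = classify_emotion(frames[0])        # S101-S102
    rule = EDUCATION_RULES.get(current_emotion)          # S103
    log = [(current_emotion, rule)]
    for frame in frames[1:]:                             # S104: detect change
        emotion = classify_emotion(frame)
        if emotion != current_emotion:                   # emotion changed
            current_emotion = emotion
            rule = EDUCATION_RULES.get(emotion)          # S105: adjust rule
        log.append((emotion, rule))
    return log
```

A session that drifts from neutral to happy would keep the neutral rule until the change is detected, then switch to the happy rule from that frame onward.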
The beneficial effects of the above technical scheme are: gather first user's first face image through the camera and confirm that first mood further comes to carry out the first education rule that corresponds with first mood, can change to the mood of first user and come the pertinence to educate for the teaching quality can keep high-efficient, solved among the prior art because the mr can't effectual catch each learning mood of student and then can't give effective guide to the emotion feedback of child on-line learning in-process, lead to teaching quality on the low side, the student can't all absorb the knowledge of learning, the problem of inefficiency.
In one embodiment, before capturing the first facial image with the camera when the first user starts learning, the method further comprises:
acquiring a second facial image of n emotions of a second user in advance;
preprocessing the second face image;
dividing the second face image into five regions, the five regions including: a forehead region, an eyebrow region, a nose bridge region, a cheek region, and a chin region;
counting preset parameters of five regions in the second face image, and training by combining the preset parameters with the second face image to obtain preset emotion indexes of n emotions;
establishing a database to store n emotions, and preset emotion indexes and preset parameters corresponding to each emotion;
in an embodiment, n is a positive integer greater than or equal to 7, and the n emotions in this embodiment may be: neutral, happy, surprised, disgust, too much, angry, fear. The preset parameters of the five regions of the seven emotions are calculated according to the seven emotions, the preset emotion indexes of the seven regions are obtained according to the preset parameters, and then the seven emotions, the corresponding preset parameters and the preset emotion indexes are stored together. The preprocessing may be pixel optimization of the face image, resolution enhancement, and the like.
The beneficial effects of the above technical scheme are: the model of a comparison is obtained by training preset emotion indexes of the emotion in n in advance, so that the acquired first face image is input into the database and the first emotion corresponding to the first face image can be directly retrieved, compared with the prior art that the image to be recognized needs to be taken to be compared with the image stored in the database, the comparison time is greatly shortened, the recognition degree of the image after preprocessing is higher than that in the prior art, and the recognition accuracy is more efficient.
In one embodiment, as shown in fig. 2, determining a first emotion based on a first facial image includes:
step S201, acquiring current parameters of a first face image;
step S202, obtaining a current emotion index based on the current parameter;
step S203, inputting the current emotion index into a database for retrieval to obtain a first emotion, wherein the first emotion is any one of n emotions.
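Steps S201–S203 can be sketched as a lookup against the stored indexes. A nearest-neighbour match is one plausible reading of the retrieval step; the patent does not specify the matching criterion, so this is an assumption:

```python
def retrieve_emotion(current_index, database):
    """Step S203 (assumed form): return the stored emotion whose preset
    emotion index lies closest to the current emotion index."""
    return min(database, key=lambda e: abs(database[e]["index"] - current_index))
```

With a database of `{emotion: {"index": value}}` entries, a current index of 0.85 against stored indexes 0.1 (neutral) and 0.9 (happy) resolves to happy.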
The beneficial effects of the above technical scheme are: when the emotion change of the appearance of the first user is not obvious, the current emotion index is obtained by using the current parameter, then the first emotion is obtained by using the emotion index, the education rule is adjusted according to the emotion change in time, the problem that in the prior art, each learning emotion of a student cannot be effectively captured, and then effective guidance cannot be given to emotion feedback in the online learning process of the child is further avoided, and the teaching quality is improved.
In one embodiment, the method further comprises:
executing a preset education rule before acquiring a first face image by using a camera;
acquiring m preset coefficients of a preset education rule, and adjusting the m preset coefficients according to each emotion in n emotions in a database;
saving the adjusted preset education rules;
the detecting whether the first emotion changes or not and outputting a detection result includes:
acquiring a third face image of the first user in real time by using a camera;
confirming a second emotion corresponding to the third face image;
comparing whether the first emotion and the second emotion are the same;
if the first emotion is the same as the second emotion, the detection result is that no change occurs;
if the first emotion and the second emotion are different, the detection result is that the change occurs;
in this embodiment, the value of m may be 5, m preset coefficients, that is, a difficulty coefficient, an important coefficient, an interest coefficient, a knowledge point coefficient, and a prediction guidance coefficient, the five coefficients are respectively adjusted for different emotions, before the camera acquires the first face image, a preset education rule corresponding to the neutral state is executed by default, and if the second emotion and the first emotion of the first user are found to be different in the learning process, the education rule is timely switched according to the second emotion.
The beneficial effects of the above technical scheme are: whether the emotion of the first user changes or not is determined by timely collecting the third face image of the first user, whether the emotion of the first user changes or not does not need to be artificially guessed, and meanwhile, the education rules corresponding to the n emotions are adjusted in advance and are conveniently stored, so that the education rules corresponding to the changed emotion are directly switched when the emotion of the first user changes, time is saved, and efficiency is improved.
In one embodiment, adjusting the first educational rule based on the detection result comprises:
if the detection result is that no change occurred, continuing to execute the first education rule;
and if the detection result is that the change occurs, adjusting the first education rule into a second education rule corresponding to the second emotion, wherein the second education rule is any one of the adjusted preset education rules except the first education rule.
The beneficial effect of the above technical scheme is that, by adjusting the education rule in time, the emphasis of the teaching content changes with the education rule, and at the same time the first user can steadily absorb educational knowledge.
In one embodiment, as shown in FIG. 3, includes:
Step 1: after the user enters the knowledge-point exercise stage, the camera is opened at the same time to capture the user's emotion;
Step 2: when the emotion analysis system detects that the user's emotion has changed, the changed emotion is passed to the decision module;
Step 3: the decision module formulates coping strategies for the different emotions;
Step 4: the decision module returns the corresponding strategy to the learning system, and the learning system makes corresponding adjustments according to the strategy's guidance.
The beneficial effects of the above technical scheme are: the difference between online education and non-online education is whether timely and targeted guidance can be given according to the reflection of children in the learning process. The invention is introduced based on a decision module of an emotion recognition system, can help a program to adjust an exercise strategy according to the in-situ emotional response of children, and provides timely and targeted auxiliary guidance.
This embodiment also discloses an educational apparatus based on emotion change, as shown in fig. 4, the apparatus comprising:
a first collecting module 401, configured to collect a first facial image with a camera when a first user starts learning;
a determining module 402 for determining a first emotion based on the first facial image;
an execution module 403 for executing a first educational rule for a first emotion;
a detecting module 404, configured to detect whether the first emotion changes, and output a detection result;
an adjusting module 405, configured to adjust the first education rule according to the detection result.
In one embodiment, the above apparatus further comprises:
the second acquisition module is used for acquiring a second facial image of n emotions of a second user in advance;
the preprocessing module is used for preprocessing the second face image;
a partitioning module for partitioning the second face image into five regions, the five regions comprising: a forehead region, an eyebrow region, a nose bridge region, a cheek region, and a chin region;
the training module is used for counting preset parameters of five regions in the second face image, and combining the preset parameters with the second face image for training to obtain preset emotion indexes of n emotions;
and the first storage module is used for establishing a database to store the n emotions, the preset emotion indexes corresponding to the n emotions and the preset parameters.
In one embodiment, as shown in FIG. 5, the determining module includes:
the obtaining sub-module 4021 is used for obtaining the current parameters of the first facial image;
an obtaining sub-module 4022, configured to obtain a current emotion index based on the current parameter;
the retrieval sub-module 4023 is configured to input the current emotion indicator into the database for retrieval to obtain a first emotion, where the first emotion is any one of the n emotions.
In one embodiment, the above apparatus further comprises:
the second execution module is used for executing the preset education rules before the camera is used for acquiring the first face image;
the acquisition module is used for acquiring m preset coefficients of preset education rules and adjusting the m preset coefficients according to each emotion in the n emotions in the database;
the second storage module is used for storing the adjusted preset education rules;
a detection module comprising:
the acquisition submodule is used for acquiring a third facial image of the first user in real time by using the camera;
the confirming submodule is used for confirming a second emotion corresponding to the third face image;
the comparison submodule is used for comparing whether the first emotion and the second emotion are the same;
and the determining submodule is used for determining that the detection result is unchanged if the comparison result of the comparing submodule is that the first emotion is the same as the second emotion, and determining that the detection result is changed if the comparison result of the comparing submodule is that the first emotion is different from the second emotion.
In one embodiment, the adjustment module includes:
the execution submodule is used for continuously executing the first education rule if the detection result of the detection module is unchanged;
and the conversion submodule is used for adjusting the first education rule into a second education rule corresponding to the second emotion if the detection result of the detection module is changed, and the second education rule is any one of the adjusted preset education rules except the first education rule.
It will be understood by those skilled in the art that the terms "first" and "second" in the present invention are used only to distinguish different stages of application and do not imply any order or importance.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A method of education based on mood changes comprising the steps of:
when a first user starts learning, a first face image is collected by a camera;
determining a first emotion based on the first facial image;
executing a first educational rule for the first emotion;
detecting whether the first emotion changes or not, and outputting a detection result;
and adjusting the first education rule according to the detection result.
2. The emotion-change based education method as claimed in claim 1, wherein, before the first face image is captured by the camera when the first user starts learning, the method further comprises:
acquiring a second facial image of n emotions of a second user in advance;
preprocessing the second face image;
dividing the second face image into five regions, the five regions including: a forehead region, an eyebrow region, a nose bridge region, a cheek region, and a chin region;
counting preset parameters of five regions in the second face image, and training by combining the preset parameters with the second face image to obtain preset emotion indexes of n emotions;
and establishing a database to store the n emotions, and preset emotion indexes and preset parameters corresponding to each emotion.
3. The method of claim 2, wherein determining a first emotion based on the first facial image comprises:
acquiring current parameters of the first facial image;
obtaining a current emotion index based on the current parameters;
and searching the database with the current emotion index to obtain the first emotion, wherein the first emotion is any one of the n emotions.
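Claim 3 only says the current emotion index is "input into the database to be searched"; one plausible reading, used in the sketch below, is nearest-index matching against the preset indexes of claim 2. That matching rule, and the numeric values shown, are assumptions for illustration.

```python
# Hedged sketch of claim 3's retrieval step: find the preset emotion whose
# stored index is closest to the current emotion index.

def retrieve_emotion(current_index, database):
    """database: {emotion: preset_index}; returns the closest-matching emotion."""
    return min(database, key=lambda emotion: abs(database[emotion] - current_index))

database = {"neutral": 0.10, "happy": 0.34, "frustrated": 0.72}
print(retrieve_emotion(0.30, database))  # → happy
```

Because the lookup always returns one of the stored keys, the result is guaranteed to be "any one of the n emotions", as the claim requires.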
4. The education method based on emotion change as claimed in claim 3, further comprising:
executing preset education rules before the first facial image is collected with the camera;
acquiring m preset coefficients of the preset education rules, and adjusting the m preset coefficients according to each of the n emotions in the database;
saving the adjusted preset education rules;
and wherein detecting whether the first emotion changes and outputting the detection result comprises the following steps:
acquiring a third facial image of the first user in real time with the camera;
confirming a second emotion corresponding to the third facial image;
comparing whether the first emotion and the second emotion are the same;
if the first emotion and the second emotion are the same, the detection result is that no change has occurred;
and if the first emotion and the second emotion are different, the detection result is that a change has occurred.
5. The education method based on emotion change as claimed in claim 4, wherein adjusting the first education rule according to the detection result comprises:
if the detection result is that no change has occurred, continuing to execute the first education rule;
and if the detection result is that a change has occurred, adjusting the first education rule to a second education rule corresponding to the second emotion, wherein the second education rule is any one of the adjusted preset education rules other than the first education rule.
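The two branches of claim 5 reduce to a single selection function. The sketch below assumes the "adjusted preset education rules" of claim 4 are stored as an emotion-to-rule mapping; all rule and emotion names are illustrative, not from the patent.

```python
# Hedged sketch of claim 5's rule adjustment.

def adjust_rule(change_detected, first_rule, second_emotion, adjusted_rules):
    """Keep the first rule if no change occurred; otherwise switch to the
    adjusted preset rule corresponding to the newly detected second emotion."""
    if not change_detected:
        return first_rule                      # continue executing the first rule
    return adjusted_rules[second_emotion]      # second education rule

adjusted_rules = {"bored": "insert_quiz", "confused": "replay_explanation"}
print(adjust_rule(False, "advance_lesson", None, adjusted_rules))    # → advance_lesson
print(adjust_rule(True, "advance_lesson", "bored", adjusted_rules))  # → insert_quiz
```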
6. An education apparatus based on emotion change, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring a first facial image by using a camera when a first user starts learning;
a determination module to determine a first emotion based on the first facial image;
an execution module to execute a first educational rule for the first emotion;
the detection module is used for detecting whether the first emotion changes or not and outputting a detection result;
and the adjusting module is used for adjusting the first education rule according to the detection result.
7. The education apparatus based on emotion change as claimed in claim 6, wherein the apparatus further comprises:
the second acquisition module is used for acquiring, in advance, second facial images of a second user in each of n emotions;
the preprocessing module is used for preprocessing the second facial images;
the partitioning module is used for dividing each second facial image into five regions, the five regions comprising: a forehead region, an eyebrow region, a nose bridge region, a cheek region, and a chin region;
the training module is used for extracting preset parameters from the five regions of the second facial images, and training with the preset parameters and the second facial images to obtain a preset emotion index for each of the n emotions;
and the first storage module is used for establishing a database to store the n emotions together with the preset emotion index and preset parameters corresponding to each emotion.
8. The education apparatus based on emotion change as claimed in claim 7, wherein the determining module comprises:
the acquisition submodule is used for acquiring the current parameters of the first facial image;
the obtaining submodule is used for obtaining a current emotion index based on the current parameters;
and the retrieval submodule is used for searching the database with the current emotion index to obtain the first emotion, wherein the first emotion is any one of the n emotions.
9. The education apparatus based on emotion change as claimed in claim 8, wherein the apparatus further comprises:
the second execution module is used for executing preset education rules before the first facial image is collected with the camera;
the acquisition module is used for acquiring m preset coefficients of the preset education rules, and adjusting the m preset coefficients according to each of the n emotions in the database;
the second storage module is used for saving the adjusted preset education rules;
the detection module comprises:
the acquisition submodule is used for acquiring a third facial image of the first user in real time by using the camera;
the confirming submodule is used for confirming a second emotion corresponding to the third face image;
the comparison submodule is used for comparing whether the first emotion and the second emotion are the same;
and the determining submodule is used for determining that the detection result is that no change has occurred if the comparison submodule finds the first emotion and the second emotion to be the same, and that a change has occurred if it finds them to be different.
10. The education apparatus based on emotion change as claimed in claim 9, wherein the adjustment module comprises:
the execution submodule is used for continuing to execute the first education rule if the detection result of the detection module is that no change has occurred;
and the conversion submodule is used for adjusting the first education rule to a second education rule corresponding to the second emotion if the detection result of the detection module is that a change has occurred, wherein the second education rule is any one of the adjusted preset education rules other than the first education rule.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911398665.1A CN111178273A (en) | 2019-12-30 | 2019-12-30 | Education method and device based on emotion change |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111178273A true CN111178273A (en) | 2020-05-19 |
Family
ID=70652281
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911398665.1A Pending CN111178273A (en) | 2019-12-30 | 2019-12-30 | Education method and device based on emotion change |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111178273A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106023693A (en) * | 2016-05-25 | 2016-10-12 | 北京九天翱翔科技有限公司 | Education system and method based on virtual reality technology and pattern recognition technology |
CN107958433A (en) * | 2017-12-11 | 2018-04-24 | 吉林大学 | A kind of online education man-machine interaction method and system based on artificial intelligence |
CN108074203A (en) * | 2016-11-10 | 2018-05-25 | 中国移动通信集团公司 | A kind of teaching readjustment method and apparatus |
CN110059614A (en) * | 2019-04-16 | 2019-07-26 | 广州大学 | A kind of intelligent assistant teaching method and system based on face Emotion identification |
CN110334610A (en) * | 2019-06-14 | 2019-10-15 | 华中师范大学 | A kind of various dimensions classroom based on computer vision quantization system and method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110334626B (en) | Online learning system based on emotional state | |
CN111242049B (en) | Face recognition-based student online class learning state evaluation method and system | |
CN106228293A (en) | teaching evaluation method and system | |
CN102136024B (en) | Biometric feature identification performance assessment and diagnosis optimizing system | |
CN110580470A (en) | Monitoring method and device based on face recognition, storage medium and computer equipment | |
WO2021077382A1 (en) | Method and apparatus for determining learning state, and intelligent robot | |
CN111136659B (en) | Mechanical arm action learning method and system based on third person scale imitation learning | |
CN112017085B (en) | Intelligent virtual teacher image personalization method | |
CN106547815B (en) | Big data-based targeted job generation method and system | |
CN114120432A (en) | Online learning attention tracking method based on sight estimation and application thereof | |
CN115205764B (en) | Online learning concentration monitoring method, system and medium based on machine vision | |
CN107945210A (en) | Target tracking algorism based on deep learning and environment self-adaption | |
CN115810163B (en) | Teaching evaluation method and system based on AI classroom behavior recognition | |
CN106897384A (en) | One kind will bring out the theme automatic evaluation method and device | |
CN111428686A (en) | Student interest preference evaluation method, device and system | |
CN106031148A (en) | Imaging device and method for automatic focus in an imaging device as well as a corresponding computer program | |
KR102174345B1 (en) | Method and Apparatus for Measuring Degree of Immersion | |
CN110379234A (en) | A kind of study coach method and device | |
CN107844762A (en) | Information processing method and system | |
CN113705349A (en) | Attention power analysis method and system based on sight estimation neural network | |
CN114581271B (en) | Intelligent processing method and system for online teaching video | |
CN113282840B (en) | Comprehensive training acquisition management platform | |
CN105631410B (en) | A kind of classroom detection method based on intelligent video processing technique | |
CN113989217A (en) | Human eye diopter detection method based on deep learning | |
CN113536893A (en) | Online teaching learning concentration degree identification method, device, system and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20200519 |