CN116888606A - Learning device and evaluation information output device - Google Patents

Learning device and evaluation information output device

Info

Publication number
CN116888606A
Authority
CN
China
Prior art keywords
nodding
information
evaluation information
evaluation
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280013277.0A
Other languages
Chinese (zh)
Inventor
香川早苗
伊藤雄一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Daikin Industries Ltd
Original Assignee
Daikin Industries Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2022001614A external-priority patent/JP7260826B2/en
Application filed by Daikin Industries Ltd filed Critical Daikin Industries Ltd
Priority claimed from PCT/JP2022/004684 external-priority patent/WO2022168973A1/en
Publication of CN116888606A publication Critical patent/CN116888606A/en

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

An evaluation information output device (100) outputs evaluation information relating to a nodding operation of one or more persons based on that nodding operation, and comprises a first information acquisition unit (101) and an output unit (102). The first information acquisition unit (101) acquires information (11) relating to the nodding operation. The output unit (102) has a learned model (40) that receives information on the nodding operation as input and outputs evaluation information of the nodding operation, and uses the learned model (40) to estimate and output the evaluation information (21) of the nodding operation from the information (11) on the nodding operation. The learned model (40) is obtained by learning information on the nodding operation and evaluation information of the nodding operation as a learning data set.

Description

Learning device and evaluation information output device
Technical Field
The present invention relates to a learning device and an evaluation information output device.
Background
Conventionally, the productivity of a conference has been estimated using the behavior, amount of speech, and the like of conference attendees obtained from images (Patent Document 1: Japanese Patent Application Laid-Open No. 2019-211962).
Disclosure of Invention
Problems to be solved by the invention
When estimating the productivity of a collective activity such as a meeting, evaluation based on the behavior and amount of speech of attendees is limited to evaluating the quantity of the collective activity, and is insufficient as an evaluation of its quality. Moreover, even when the measured quantity of collective activity is small, high-quality collective activity may be taking place. Quality evaluation is therefore important, but evaluating the quality of a collective activity such as a conference is difficult.
Means for solving the problems
The learning device of the first aspect is a learning device for learning the nodding operation of one or more persons, and includes a learning unit. The learning unit generates a learned model that receives information on the nodding operation as input and outputs evaluation information of the nodding operation. The learning unit learns information on the nodding operation and evaluation information of the nodding operation as a learning data set to generate the learned model.
In this learning device, a learned model capable of evaluating the quality of a collective activity such as a conference can be generated by learning, as a learning data set, information on the nodding operation, which is a reaction of attendees of a conference or the like, and evaluation information of the nodding operation.
The evaluation information output device of the second aspect outputs evaluation information relating to a nodding operation of one or more persons based on that nodding operation, and has a first information acquisition unit and an output unit. The first information acquisition unit acquires information on the nodding operation. The output unit has a learned model that receives information on the nodding operation as input and outputs evaluation information of the nodding operation. The output unit uses the learned model to estimate and output evaluation information of the nodding operation based on the information on the nodding operation. The learned model is obtained by learning information on the nodding operation and evaluation information of the nodding operation as a learning data set.
In this evaluation information output device, the quality of a collective activity such as a conference can be evaluated by using the learned model to estimate evaluation information of the nodding operation from information on the nodding operation, which is a reaction of attendees of the conference or the like.
The evaluation information output apparatus according to the third aspect further includes a second information acquisition unit in addition to the apparatus according to the second aspect. The second information acquisition unit acquires information on an object that is a cause of the nodding operation. The object that becomes the cause of the nodding motion includes at least one of a speech in a conference, data or objects presented in a presentation, and a performance of a person or animal. The output unit outputs information on the object that is the cause of the nodding operation in association with the evaluation information of the nodding operation.
In this evaluation information output apparatus, information on an object that causes a nodding operation and evaluation information on the nodding operation are output in association with each other, whereby evaluation information on the nodding operation performed on a specific object can be obtained.
The evaluation information output device of the fourth aspect is the device of the second or third aspect, wherein the evaluation information of the nodding operation is information on the emotion of the person. The information on the emotion of the person includes information on the person's pleasure or displeasure, or information on whether the person affirms the object that is the cause of the nodding operation. The evaluation information of the nodding operation can be obtained by analyzing image data of the person's face or by analyzing sound data.
In this evaluation information output apparatus, the quality of the nodding can be determined by estimating the intention of the nodding operation of the person from the information on the nodding operation.
The evaluation information output device of the fifth aspect is the device of the second or third aspect, wherein the evaluation information of the nodding operation is an evaluation of the object that is the cause of the nodding operation. The evaluation of the object that is the cause of the nodding operation is the result of a questionnaire survey conducted on persons.
In this evaluation information output device, the quality of the nodding operation can be determined by obtaining, through a survey, an evaluation of the object that is the cause of the nodding operation.
The evaluation information output device of the sixth aspect is the device of any one of the second to fifth aspects, wherein the information on the nodding motion includes at least one of the number, period, and amplitude of nodding motion.
In this evaluation information output apparatus, information on the reactions of attendees such as conferences can be obtained by measuring the number, period, or amplitude of nodding operations.
The evaluation information output device of the seventh aspect is the device of any one of the second to fifth aspects, wherein the information on the nodding operation includes at least one of the timing of the nodding operation, the intensity of the nodding operation, whether a plurality of persons nod at the same time, and the number of persons nodding at the same time.
In this evaluation information output device, information on the reactions of attendees of a conference or the like can be obtained by measuring the timing of the nodding operation, the intensity of the nodding operation, whether some of a plurality of persons nod simultaneously, or the number of persons nodding simultaneously.
The evaluation information output device of the eighth aspect is the device of the third aspect, wherein the information on the nodding operation includes the time from an utterance in the conference until the nodding operation occurs.
In this evaluation information output device, the quality of the nodding operation can be determined by measuring the time from an utterance in the conference until the nodding operation occurs.
The evaluation information output device of the ninth aspect is the device of any one of the second to fifth aspects, wherein the information on the nodding operation includes at least one of the frequency of the nodding operation and the number of nodding operations per unit time.
In this evaluation information output device, the quality of the nodding operation can be determined by acquiring the frequency of the nodding operation or the number of nodding operations per unit time.
The evaluation information output apparatus according to a tenth aspect is the apparatus according to any one of the second to ninth aspects, wherein the first information acquisition unit acquires information on the nodding operation obtained by recognizing the nodding operation using a seat surface sensor, a motion capture device, an acceleration sensor, a depth camera, or a moving image of a person.
In this evaluation information output device, information on the nodding operation, which is a gesture made in the conference, can be easily acquired by using a seat surface sensor, a motion capture device, an acceleration sensor, a depth camera, or a moving image of a person.
Drawings
Fig. 1 is a functional block diagram of the learning device 1.
Fig. 2 is a functional block diagram of the evaluation information output apparatus 100.
Fig. 3 is a diagram showing an example of learning data.
Fig. 4 is a diagram for explaining the learning process.
Fig. 5 is a flowchart of the evaluation information output apparatus 100.
Fig. 6 is a functional block diagram of the evaluation information output apparatus 200.
Detailed Description
(1) Integral structure of evaluation information output device
Fig. 2 shows an evaluation information output apparatus 100 according to the present embodiment. The evaluation information output apparatus 100 is realized by a computer and includes a first information acquisition unit 101 and an output unit 102. The first information acquisition unit 101 acquires information 11 on the nodding operation. The output unit 102 has a learned model 40, and uses the learned model 40 to estimate and output the evaluation information 21 of the nodding operation from the information 11 on the nodding operation.
(2) Detailed structure of evaluation information output device
(2-1) first information acquisition section
The first information acquisition unit 101 acquires information 11 on the nodding operation. In the present embodiment, the information 11 on the nodding operation is the number of nodding operations, which the first information acquisition unit 101 acquires using a seat surface sensor (not shown).
The number of nodding operations is measured using a seat surface sensor (pressure sensor). Participants in the conference sit on chairs provided with seat surface sensors.
First, the position of the center of gravity P and the total load WT are obtained from the pressure magnitudes of the seat surface sensors provided at the four corners of the chair. Changes in the center of gravity P and the total load WT are learned by machine learning so that it can be detected whether a movement corresponds to a nodding operation; the moment a person nods is detected from the change in the center-of-gravity load.
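As an illustrative sketch only (the patent discloses no code; the seat dimensions, coordinate convention, and function name below are assumptions), the derivation of the center of gravity P and the total load WT from the four corner pressure readings can be written as:

```python
def center_of_gravity(p_fl, p_fr, p_rl, p_rr, width=0.4, depth=0.4):
    """Estimate the center of gravity P and total load WT of a seated person.

    p_fl, p_fr, p_rl, p_rr are the pressure readings at the front-left,
    front-right, rear-left, and rear-right corners of the seat surface.
    Seat dimensions (meters) are illustrative assumptions.
    """
    wt = p_fl + p_fr + p_rl + p_rr            # total load WT
    if wt == 0:
        return (width / 2, depth / 2), 0.0    # empty seat: nominal center
    # Center of gravity P as the pressure-weighted average of corner positions:
    x = (p_fr + p_rr) * width / wt            # 0 = left edge, width = right edge
    y = (p_fl + p_fr) * depth / wt            # 0 = rear edge, depth = front edge
    return (x, y), wt
```

A nod would then appear as a brief forward-and-back excursion of P (with a transient change in WT), which is what the machine-learned detector described above would classify.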
The time in the conference is divided into a plurality of frames, and the number of nodding operations, i.e., the nodding amount, is calculated. Whether each participant of the conference nods is judged frame by frame. If the total number of frames in which participant k nods is S_k, the nodding amount A_kt of participant k at a certain time t is calculated by the following Equation 1.
Equation 1: A_kt = n_kt / S_k
where n_kt is the number of frames at time t in which participant k nods.
For example, when the number of participants is 6, the nodding amount per frame is obtained by summing over all 6 participants. The nodding amount A_t at a certain time t is calculated by the following Equation 2.
Equation 2: A_t = Σ_k A_kt
As shown in Equation 1, each participant's nods are weighted by the reciprocal of that participant's total nod-frame count, so that when the nodding amount is calculated by Equation 2, nods by a person who nods often have a small influence and nods by a person who nods rarely have a large influence.
(2-7) Output unit
The output unit 102 has a learned model 40, and uses the learned model 40 to estimate and output the evaluation information 21 of the nodding operation from the information 11 on the nodding operation. The output unit 102 can use a processor such as a CPU or GPU. It reads a program stored in a storage device (not shown) and performs predetermined image processing and arithmetic processing according to the program. The output unit 102 can also write calculation results to the storage device and read information stored in the storage device according to the program. The storage device may be used as a database.
(3) Learning process
The learned model 40 used by the evaluation information output apparatus 100 will now be described. The learned model 40 is an estimation model trained in advance by the learning device 1 from a learning data set including information on the nodding operation and evaluation information of the nodding operation.
Fig. 1 shows a learning device 1 according to the present embodiment. The learning device 1 is realized by a computer. The learning device 1 has a learning unit 2.
The learning unit 2 learns the information 10 on the nodding operation and the evaluation information 20 of the nodding operation as a learning data set 30. In the present embodiment, the information 10 on the nodding operation is the number of nodding operations (the nodding amount). The evaluation information 20 of the nodding operation is an evaluation of the object that is the cause of the nodding operation. The object that is the cause of the nodding operation is an utterance in the conference; the utterances in the conference relate to the creatives created in the conference.
In the learning process, the information 10 on the nodding operation is acquired using the seat surface sensor, and the evaluation information 20 of the nodding operation is obtained by a questionnaire survey.
The learned model 40 is obtained by learning the information 10 on the nodding operation and the evaluation information 20 of the nodding operation as the learning data set 30.
Fig. 3 shows an example of the learning data. In this example, the number of nodding operations (the nodding amount) of the participants of the conference while a creative is being created in the conference is used as the information on the nodding operation. The average value of the survey results for the creative created in the meeting is used as the evaluation information of the nodding operation. In the survey, participants in the meeting rated the creatives on a 5-point scale.
As shown in fig. 3, the learning data set is learned with the nodding amount A_1 for creative X_1, created at time t_1 in the meeting, as input and the survey-result average "2" as output. Likewise, the nodding amount A_2 for creative X_2 created at time t_2 is the input and the survey-result average "3" is the output; the nodding amount A_3 for creative X_3 created at time t_3 is the input and the survey-result average "4" is the output; and the nodding amount A_n for creative X_n created at time t_n is the input and the survey-result average "2" is the output.
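The patent does not specify the model class of the learned model 40; purely as an illustrative stand-in, a least-squares fit mapping nodding amount to survey-result average (all numeric values below are made up) could look like:

```python
def fit_linear(a, y):
    """Ordinary least-squares fit of y ≈ w * a + b over the data set."""
    n = len(a)
    mean_a = sum(a) / n
    mean_y = sum(y) / n
    cov = sum((ai - mean_a) * (yi - mean_y) for ai, yi in zip(a, y))
    var = sum((ai - mean_a) ** 2 for ai in a)
    w = cov / var
    return w, mean_y - w * mean_a

def predict(model, a):
    """Estimate the survey-result average from a nodding amount."""
    w, b = model
    return w * a + b

# Learning data set in the style of Fig. 3 (illustrative values):
A = [1.2, 2.0, 3.1, 1.1]   # nodding amounts A_1 .. A_n
y = [2.0, 3.0, 4.0, 2.0]   # survey-result averages for X_1 .. X_n
model = fit_linear(A, y)   # higher nodding amount -> higher estimated rating
```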
Fig. 4 shows an example of a scatter diagram in which the vertical axis is the total nodding amount, obtained by summing the nod frames of all participants, and the horizontal axis is the average value of the survey results.
In the present embodiment, 13 groups of 6 participants each took part in a conference. The total nodding amount of each group is calculated by simply summing the nod frames of all participants of the group. In addition, whether the creative made an impression was surveyed, with responses on a 5-point scale.
As shown in fig. 4, the average value of the survey results is positively correlated with the total nodding amount. Correlation analysis of the survey-result average and the total nodding amount using Spearman's rank correlation coefficient gave p > .05; the absolute value of the correlation coefficient was 0.49, and a correlation was observed. Since the average value of the survey results correlates with the total nodding amount, the quality of the creative and the nodding are related not only instantaneously but also over the medium to long term.
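The rank-correlation computation referred to above can be sketched as follows (a pure-Python Spearman implementation assuming no tied ranks; with real data a library routine such as `scipy.stats.spearmanr` would normally be used):

```python
def spearman_rho(x, y):
    """Spearman's rank correlation coefficient (no tied ranks assumed)."""
    def ranks(v):
        # Rank 1 for the smallest value, rank n for the largest.
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))   # classic Spearman formula
```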
(4) Evaluating the overall operation of an information output device
Fig. 5 shows a flowchart of the evaluation information output apparatus 100. In this embodiment, a case will be described in which the evaluation information output apparatus 100 is used in a conference.
First, a conference is started in step S1. Attendees of the conference are photographed by a camera and sit on chairs provided with pressure sensors. The number of nodding operations (the nodding amount) of the attendees is acquired by the first information acquisition unit 101 (step S2). The output unit 102 uses the learned model 40 to estimate the average value of the survey results from the number of nodding operations of the attendees (step S3). Next, the output unit 102 outputs the average value estimated in step S3 to a display (not shown) (step S4), and the display shows the estimated average value of the survey results.
(5) Features
(5-1)
The learning device 1 according to the present embodiment is a learning device 1 for learning the nodding operation of one or more persons, and includes a learning unit 2. The learning unit 2 generates a learned model 40 that receives information on the nodding operation as input and outputs evaluation information of the nodding operation. The learning unit 2 learns the information 10 on the nodding operation and the evaluation information 20 of the nodding operation as the learning data set 30, and generates the learned model 40.
In the learning device 1, a learned model 40 capable of evaluating the quality of a collective activity such as a conference can be generated by learning, as the learning data set 30, the information 10 on the nodding operation, which is a reaction of attendees of the conference or the like, and the evaluation information 20 of the nodding operation.
(5-2)
The evaluation information output apparatus 100 according to the present embodiment outputs evaluation information relating to the nodding operation of one or more persons, and includes a first information acquisition unit 101 and an output unit 102. The first information acquisition unit 101 acquires information 11 on the nodding operation. The output unit 102 has the learned model 40, which receives the information 11 on the nodding operation as input and outputs the evaluation information 21 of the nodding operation. The output unit 102 uses the learned model 40 to estimate and output the evaluation information 21 of the nodding operation from the information 11 on the nodding operation. The learned model 40 is obtained by learning the information 10 on the nodding operation and the evaluation information 20 of the nodding operation as the learning data set 30.
In the evaluation information output apparatus 100, the learned model 40 is used to estimate evaluation information of the nodding operation from the information 11 on the nodding operation, which is a reaction of attendees of the conference or the like, thereby enabling a quality evaluation of a collective activity such as a conference.
The output unit 102 of the evaluation information output apparatus 100 can present an intellectual productivity score of the conference as the evaluation information of the nodding operation, based on the nodding amount over the elapsed time of the conference. The evaluation information output apparatus 100 thus evaluates the intellectual productivity of the conference from the participants' nodding operations and their timing.
Further, the synchronicity of the participants may be evaluated from the timing and duration of their nods, and the contribution of an utterance in the conference can be evaluated from the nodding that follows it.
(5-3)
In the evaluation information output apparatus 100 according to the present embodiment, the evaluation information 21 of the nodding operation is an evaluation of the object that is the cause of the nodding operation. The evaluation of the object that is the cause of the nodding operation is the result of a questionnaire survey conducted on persons.
In the evaluation information output apparatus 100, the quality of the nodding operation can be determined by obtaining, through a survey, an evaluation of the object that is the cause of the nodding operation.
(5-4)
In the evaluation information output apparatus 100 of the present embodiment, the information 11 on the nodding operation includes the number of nodding operations.
In the evaluation information output apparatus 100, information on the reactions of attendees of a conference or the like can be obtained by measuring the number of nodding operations and calculating the nodding amount.
(5-5)
In the evaluation information output apparatus 100 according to the present embodiment, the first information acquisition unit 101 acquires the information 11 on the nodding operation, which is obtained by recognizing the nodding operation using the seat surface sensor.
In the evaluation information output apparatus 100, the number of nodding operations, a gesture made in the conference, can be easily measured using the seat surface sensor.
(6) Modification examples
(6-1) modification 1A
The evaluation information output apparatus 100 of the present embodiment has been described as including the first information acquisition unit 101 and the output unit 102, but it may further include a second information acquisition unit.
Fig. 6 shows an evaluation information output apparatus 200 of modification 1A. The evaluation information output apparatus 200 is implemented by a computer. The evaluation information output apparatus 200 includes a first information acquisition unit 201, a second information acquisition unit 203, and an output unit 202. The first information acquisition unit 201 acquires information 11 concerning the nodding operation. The second information acquisition unit 203 acquires information 50 related to an object that is a cause of the nodding operation.
The information 50 on the object that is the cause of the nodding operation is an utterance in the conference. In the evaluation information output apparatus 200 of modification 1A, the second information acquisition unit 203 captures a moving image using a camera (not shown) and acquires from the moving image the creative that is uttered in the conference.
The output unit 202 outputs, as output data 60, the information 50 on the object that is the cause of the nodding operation in association with the evaluation information 21 of the nodding operation. Further, the output unit 202 may output, as the evaluation information 21 of the nodding operation, evaluation data obtained by combining the record of the meeting, laid out in time series, with the intellectual productivity score.
In this evaluation information output apparatus 200, the evaluation information of a nodding operation performed for a specific object can be obtained by outputting the information 50 on the object that is the cause of the nodding operation in association with the evaluation information 21 of the nodding operation.
(6-2) modification 1B
The evaluation information output apparatus 100 of the present embodiment has been described for the case where the information 11 on the nodding operation is the number of nodding operations, but the information may instead be the cycle or amplitude of the nodding operation.
The first information acquisition unit 101 may also acquire information on nodding that includes the time at which the nodding operation is performed. By acquiring this time, the number of participants nodding can be grasped. The more people nod, the higher the likelihood that a creative was created just beforehand; there are more nodding operations when a creative is being created than when there is no conversation.
The information 11 on the nodding operation may also be the timing of the nodding operation, such as whether a participant nods immediately after an utterance in the conference or with a delay, the intensity of the nodding operation, whether several persons nod at the same time, the number of persons nodding at the same time, and so on.
The characteristics of a nod can be determined using the time from the creation of a creative until the nodding operation occurs, the length (frequency) of the nodding operation, the number of nodding operations per unit time, and the like. Since a moving image is captured, the time from the creation of a creative to the occurrence of the nod can be extracted from the video and digitized as the time difference from the utterance. The frequency of the nodding operation is digitized by extracting it from the images.
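As a hedged sketch of the digitization just described (timestamps in seconds; the window length, function name, and return shape are assumptions, not from the patent):

```python
def nod_features(utterance_time, nod_times, window=60.0):
    """Features of the nods following one utterance.

    Returns (latency, rate): the time from the utterance to the first
    subsequent nod (None if no nod follows), and the number of nods per
    second within `window` seconds after the utterance.
    """
    after = sorted(t for t in nod_times if t >= utterance_time)
    latency = (after[0] - utterance_time) if after else None
    in_window = [t for t in after if t - utterance_time <= window]
    return latency, len(in_window) / window
```

For example, an utterance at t = 10 s followed by nods at 12 s and 15 s yields a latency of 2 s and a rate of 2 nods per 60-second window.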
By discriminating the characteristics of the nods, the concentration of the participants in the conference and the degree of excitement of the conference can be determined.
(6-3) modification 1C
The evaluation information output apparatus 100 of the present embodiment has been described for the case where the information 11 on the nodding operation, i.e., the number of nodding operations, is the total nodding amount of each group of conference participants, but the present invention is not limited thereto. The number of nodding operations of each individual participant may also be used as the information 11 on the nodding operation.
(6-4) modification 1D
The evaluation information output apparatus 100 of the present embodiment has been described for the case where the evaluation information 21 of the nodding operation is the result of a questionnaire survey conducted on persons, asking whether the creative created in the meeting made an impression, but the present invention is not limited thereto. The survey may instead ask whether the creative created in the meeting is original, acceptable, or socially acceptable.
(6-5) modification 1E
The evaluation information output apparatus 100 of the present embodiment has been described for the case where the object that is the cause of the nodding operation is a creative uttered in the conference, but the present invention is not limited thereto. The object that is the cause of the nodding operation may also include at least one of data or objects presented in a presentation and a performance by a person or animal.
(6-6) modification 1F
The evaluation information output apparatus 100 of the present embodiment has been described for the case where the evaluation information 21 of the nodding operation is an evaluation of the object that is the cause of the nodding operation, but the evaluation information may instead be information on the emotion of the person. The information on the emotion of the person includes information on the person's pleasure or displeasure, or on whether the person affirms the object that is the cause of the nodding operation. The evaluation information of the nodding operation can be obtained by analyzing image data of the person's face or by analyzing sound data.
By estimating the intention of the person's nod from the information 11 on the nodding operation, the quality of the nod can be determined.
(6-7) modification 1G
The evaluation information output apparatus 100 according to the present embodiment has been described for the case where the first information acquisition unit 101 acquires information on the nodding operation from the seat surface sensor, but the present invention is not limited thereto. Information on the nodding operation obtained by recognizing the nodding operation using a motion capture device, an acceleration sensor, a depth camera, or a moving image of a person may be acquired instead.
As the motion capture device, OptiTrack or a similar system can be used to estimate the movement of the head. As the acceleration sensor, JINS MEME and the like are examples; an acceleration sensor can capture the motion of the head. The number of nodding operations can be measured using a depth camera or a Kinect sensor. The nodding operation can also be detected from the head movement in a moving image of a conference participant.
(6-8) modification 1H
While the embodiments of the present disclosure have been described above, it should be understood that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as set forth in the following claims.
Description of the reference numerals
1: learning device;
2: learning unit;
100, 200: evaluation information output device;
101, 201: first information acquisition unit;
203: second information acquisition unit;
102, 202: output unit;
10, 11: information on the nodding operation;
20, 21: evaluation information of the nodding operation;
30: learning data set;
40: learned model;
50: information on the object that is the cause of the nodding operation.
Prior art literature
Patent literature
Patent document 1: japanese patent application laid-open No. 2019-211962

Claims (10)

1. A learning device (1) that learns the nodding motion of one or more persons,
the learning device (1) having a learning unit (2), the learning unit (2) generating a learned model (40) that takes information (10) related to the nodding motion as input and outputs evaluation information (20) of the nodding motion,
wherein the learning unit learns, as a learning data set (30), information related to the nodding motion and evaluation information of the nodding motion to generate the learned model.
2. An evaluation information output device (100, 200) that outputs, based on a nodding motion of one or more persons, evaluation information related to the nodding motion,
the evaluation information output device comprising:
a first information acquisition unit (101, 201) that acquires information (11) related to the nodding motion; and
an output unit (102, 202) having a learned model that takes the information related to the nodding motion as input and outputs the evaluation information of the nodding motion, the output unit using the learned model to estimate and output the evaluation information (21) from the information related to the nodding motion,
wherein the learned model is obtained by learning, as a learning data set, information related to the nodding motion and evaluation information of the nodding motion.
3. The evaluation information output device according to claim 2, wherein,
the evaluation information output device further comprises a second information acquisition unit (203), the second information acquisition unit (203) acquiring information (50) related to an object that caused the nodding motion,
the object that caused the nodding motion includes at least one of an utterance in a meeting, data or an object presented in a presentation, and a performance by a person or an animal,
the output unit outputs the information related to the object that caused the nodding motion in association with the evaluation information of the nodding motion.
4. The evaluation information output device according to claim 2 or 3, wherein,
the evaluation information of the nodding motion is information related to the emotion of the person,
the information related to the emotion of the person includes information on the pleasure or displeasure of the person, or information on whether the person approves of the object that caused the nodding motion,
the evaluation information of the nodding motion can be obtained by analysis of image data of the person's face or analysis of sound data.
5. The evaluation information output device according to claim 2 or 3, wherein,
the evaluation information of the nodding motion is an evaluation of the object that caused the nodding motion,
the evaluation of the object that caused the nodding motion is the result of a survey conducted on the person.
6. The evaluation information output device according to any one of claims 2 to 5, wherein,
the information related to the nodding motion includes at least one of the number, period, and amplitude of the nodding motions.
7. The evaluation information output device according to any one of claims 2 to 5, wherein,
the information related to the nodding motion includes at least one of the time of the nodding motion, the intensity of the nodding motion, whether there are persons, among a plurality of persons, whose nodding motions coincide in timing, and the number of persons whose nodding motions coincide in timing.
8. The evaluation information output device according to claim 3, wherein,
the information related to the nodding motion includes the time from an utterance in the meeting until the nodding motion occurs.
9. The evaluation information output device according to any one of claims 2 to 5, wherein,
the information related to the nodding motion includes at least one of the frequency of the nodding motion and the number of nodding motions per unit time.
10. The evaluation information output device according to any one of claims 2 to 9, wherein,
the first information acquisition unit acquires information related to the nodding motion, obtained by recognizing the nodding motion using a seat surface sensor, a motion capture device, an acceleration sensor, a depth camera, or a moving image of the person.
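Claims 6 to 9 enumerate the kinds of information related to the nodding motion that the first information acquisition unit may provide. As an illustrative sketch only (the field names are assumptions, not terms defined by the patent), these can be grouped into one record passed to the learned model:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class NoddingInfo:
    """Assumed container for the nod features enumerated in claims 6-9."""
    count: int = 0                          # number of nodding motions (claim 6)
    period_s: Optional[float] = None        # period of the motion (claim 6)
    amplitude_deg: Optional[float] = None   # amplitude of the motion (claim 6)
    times_s: List[float] = field(default_factory=list)  # nod times (claim 7)
    intensity: Optional[float] = None       # intensity of the motion (claim 7)
    synchronized_people: int = 0            # persons nodding together (claim 7)
    latency_after_utterance_s: Optional[float] = None   # claim 8
    frequency_hz: Optional[float] = None    # frequency of the motion (claim 9)
    nods_per_minute: Optional[float] = None # rate per unit time (claim 9)

info = NoddingInfo(count=3, period_s=1.0, amplitude_deg=14.0)
print(info.count)  # -> 3
```

Which fields are populated depends on the sensor used; a seat surface sensor and a depth camera, for example, would fill different subsets.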
CN202280013277.0A 2021-02-05 2022-02-07 Learning device and evaluation information output device Pending CN116888606A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2021-017261 2021-02-05
JP2022001614A JP7260826B2 (en) 2021-02-05 2022-01-07 Learning device and evaluation information output device
JP2022-001614 2022-01-07
PCT/JP2022/004684 WO2022168973A1 (en) 2021-02-05 2022-02-07 Learning device and evaluation information output device

Publications (1)

Publication Number Publication Date
CN116888606A true CN116888606A (en) 2023-10-13

Family

ID=88264875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280013277.0A Pending CN116888606A (en) 2021-02-05 2022-02-07 Learning device and evaluation information output device

Country Status (1)

Country Link
CN (1) CN116888606A (en)

Similar Documents

Publication Publication Date Title
Bailenson et al. The effect of behavioral realism and form realism of real-time avatar faces on verbal disclosure, nonverbal disclosure, emotion recognition, and copresence in dyadic interaction
JP6467965B2 (en) Emotion estimation device and emotion estimation method
US9848170B2 (en) System and method for improving communication productivity
US20180078184A1 (en) Dual-task performing ability evaluation method and dual-task performing ability evaluation system
JP6715410B2 (en) Evaluation method, evaluation device, evaluation program, and evaluation system
JP2019055064A (en) Diagnosis support information providing device and diagnosis support information providing method
WO2022025200A1 (en) Reaction analysis system and reaction analysis device
CN112163760A (en) Student learning concentration degree detection method and system
JP7370050B2 (en) Lip reading device and method
JP6709868B1 (en) Analysis method, analysis system, and analysis program
Palazzi et al. Spotting prejudice with nonverbal behaviours
CN116888606A (en) Learning device and evaluation information output device
JP7260826B2 (en) Learning device and evaluation information output device
WO2022168973A1 (en) Learning device and evaluation information output device
WO2023084715A1 (en) Information processing device, information processing method, and program
Masmoudi et al. Meltdowncrisis: Dataset of autistic children during meltdown crisis
WO2022025025A1 (en) Emotion analysis system and emotion analysis device
US20190213403A1 (en) Augmented reality predictions using machine learning
JP7014761B2 (en) Cognitive function estimation method, computer program and cognitive function estimation device
CN109697413B (en) Personality analysis method, system and storage medium based on head gesture
WO2022064622A1 (en) Emotion analysis system
JP7198892B2 (en) Video playback device, video playback method, and video distribution system
JP6945693B2 (en) Video playback device, video playback method, and video distribution system
JPWO2023062794A5 (en)
JP2023038843A (en) Terminal, learning method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination