WO2020162272A1 - Information processing device and information processing method - Google Patents

Information processing device and information processing method

Info

Publication number
WO2020162272A1
WO2020162272A1
Authority
WO
WIPO (PCT)
Prior art keywords
learner
information
unit
guidance
information processing
Prior art date
Application number
PCT/JP2020/003068
Other languages
French (fr)
Japanese (ja)
Inventor
一希 笠井
慎 江上
Original Assignee
オムロン株式会社 (OMRON Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2019046234A
Application filed by オムロン株式会社 (OMRON Corporation)
Publication of WO2020162272A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/02 Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • G09B5/08 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/12 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations, different stations being capable of presenting different information simultaneously
    • G09B7/00 Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02 Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
    • G09B7/04 Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student, characterised by modifying the teaching programme in response to a wrong answer, e.g. repeating the question, supplying a further explanation
    • G09B19/00 Teaching not covered by other main groups of this subclass

Definitions

  • The present invention relates to an information processing device and an information processing method.
  • Patent Literature 1 describes a technique of monitoring a learner's re-reading frequency and gaze fixations, comparing the results with those of other learners, and presenting the comparison to the learner and the instructor.
  • However, the technique of Patent Literature 1 only evaluates the learner's degree of understanding; whether the learner actually needs instruction must still be judged by the learner or the instructor based on the evaluation result.
  • An aspect of the present invention aims to realize a learning support system that visualizes a learner's learning process and determines whether the learner needs instruction, so that the system itself can extract learners who need instruction.
  • An information processing device according to an aspect of the present invention includes: a face information acquisition unit that acquires face information including information on at least part of a learner's face; a visual target identification unit that refers to the face information to identify the multiple visual targets the learner is viewing; and an instruction determination unit that determines the necessity of instruction, or the instruction necessity level, for the learner according to the combination of visual targets identified by the visual target identification unit.
  • An information processing method according to an aspect of the present invention includes: a face information acquisition step of acquiring face information including information on at least part of a learner's face; a visual target identification step of referring to the face information to identify the multiple visual targets the learner is viewing; and an instruction determination step of determining the necessity of instruction, or the instruction necessity level, for the learner according to the combination of visual targets identified in the visual target identification step.
  • An information processing program according to an aspect of the present invention causes a computer to function as any of the above information processing devices, namely as the face information acquisition unit, the visual target identification unit, and the instruction determination unit.
  • An information processing device according to another aspect of the present invention includes: a face information acquisition unit that acquires face information including information on at least part of a learner's face; a visual target identification unit that refers to the face information to identify the multiple visual targets the learner is viewing; and a visualization information generation unit that generates visualized information indicating the visual targets identified by the visual target identification unit.
  • The information processing device 100 is a learning support system that visualizes a learner's learning process and determines whether the learner needs instruction, thereby allowing the system to extract learners who need instruction.
  • FIG. 1 is a block diagram showing the components of the information processing device 100 according to an embodiment of the present invention.
  • The information processing device 100 includes a face information acquisition unit 1, a visual target identification unit 2, and an instruction determination unit 3.
  • The information processing device 100 may also include a state detection unit 4, a learner information acquisition unit 5, a storage unit 6, a display unit 7, and an image acquisition unit 8.
  • The visual target identification unit 2, the instruction determination unit 3, and the state detection unit 4 are realized as components of the control unit 9 of the information processing device 100.
  • The information processing device 100 can thus visualize the learner's learning process and determine whether the learner needs instruction.
  • The information processing device 100 is mounted on, for example, a terminal device that can be connected to a local or global network (for example, a smartphone, a tablet terminal, a personal computer, or a television receiver).
  • The face information acquisition unit 1 acquires the learner's face information from an image acquired from an imaging unit such as a camera.
  • The learner's face information is information indicating facial feature quantities.
  • The facial feature quantities include, for example, position information indicating the position of each facial part (for example, eyes, nose, mouth, and eyebrows), shape information indicating its shape, and size information indicating its size.
  • The eye information includes, for example, the corner points of the inner and outer corners of the eyes and the edges of the iris and pupil.
  • The face information acquisition unit 1 may also apply correction processing such as noise reduction and edge enhancement to the image acquired from the imaging unit as appropriate.
  • The face information acquisition unit 1 transmits the extracted face information to the state detection unit 4.
  • The state detection unit 4 detects the learner's state based on the face information extracted by the face information acquisition unit 1, and then transmits information about the detection result to the visual target identification unit 2.
  • The learner's state detected by the state detection unit 4 is, for example, the state of each part of the learner's face: at least one of the learner's line of sight, pupil state, number of blinks, eyebrow movement, cheek movement, eyelid movement, lip movement, and jaw movement.
  • From the facial feature quantities acquired by the face information acquisition unit 1 (position information indicating the position of each facial part such as the eyes, nose, mouth, cheeks, and eyebrows, shape information indicating its shape, size information indicating its size, and so on), the state detection unit 4 may detect, as the learner's state, at least one of the learner's line of sight, pupil state, number of blinks, eyebrow movement, cheek movement, eyelid movement, lip movement, and jaw movement.
  • In this way, the information processing device 100 can suitably detect the learner's state.
  • The method for detecting the line of sight is not particularly limited. One method is to provide a point light source (not shown) in the information processing device 100 and have the imaging unit capture the corneal reflection image of light from the point light source over a predetermined time, thereby tracking where the learner's line of sight moves.
  • The type of point light source is not particularly limited; examples include visible light and infrared light. Using an infrared LED, for example, makes it possible to detect the line of sight without making the learner uncomfortable.
  • When detecting the line of sight, if it does not move for a predetermined time or longer, the learner can be said to be gazing at the same place.
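A minimal sketch of the fixation heuristic just described, assuming gaze samples have already been converted to screen coordinates; the function name, sampling window, and dispersion threshold are illustrative assumptions, not part of the patent:

```python
import math

def is_fixation(gaze_points, max_dispersion_px=30.0):
    """Return True if the gaze stayed within a small radius over the
    whole sampling window, i.e. the learner kept gazing at one place.

    gaze_points: list of (x, y) gaze coordinates sampled over a
    predetermined time window (e.g. a few hundred milliseconds).
    """
    xs = [x for x, _ in gaze_points]
    ys = [y for _, y in gaze_points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    # A fixation: every sample lies close to the centroid of the window.
    return all(math.hypot(x - cx, y - cy) <= max_dispersion_px
               for x, y in gaze_points)
```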
  • The method of detecting the pupil state is not particularly limited; one example is detecting the circular pupil from an image of the eye using the Hough transform.
  • People's pupils tend to dilate when they concentrate, so the learner's degree of concentration can be evaluated by detecting pupil size. For example, if the pupil size is detected over a predetermined time and the pupil dilates within that time, the learner is likely to be gazing at a target.
  • A threshold may also be set so that a pupil at or above the threshold size is evaluated as "dilated" and a pupil below the threshold is evaluated as "contracted".
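As an illustration, the following sketch detects a circular pupil with OpenCV's Hough circle transform and applies the size threshold described above. The use of OpenCV and all parameter values are assumptions made for the example; the patent only names the Hough transform:

```python
import cv2

def pupil_state(eye_image_gray, dilated_threshold_px=12):
    """Detect the pupil as a circle and classify its size."""
    blurred = cv2.medianBlur(eye_image_gray, 5)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
        param1=100, param2=30, minRadius=5, maxRadius=60)
    if circles is None:
        return None  # no circular pupil found in this frame
    _, _, radius = circles[0][0]  # strongest circle: (x, y, r)
    return "dilated" if radius >= dilated_threshold_px else "contracted"
```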
  • The method of detecting the number of blinks is not particularly limited; one example is irradiating the learner's eyes with infrared light and detecting the difference in the amount of reflected infrared light between when the eyes are open and when they are closed.
  • When a person concentrates, they tend to blink at stable intervals and at a low frequency, so the learner's degree of concentration can be evaluated by detecting blinks. For example, if blinks are counted over a predetermined time and occur at stable intervals within that time, the learner is likely to be gazing at a target.
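A sketch of that interval-stability check, judging regular blinking via the coefficient of variation of inter-blink intervals; the threshold and the minimum number of blinks are illustrative assumptions:

```python
import statistics

def blinking_is_stable(blink_times_s, max_cv=0.25):
    """Return True if blinks occurred at stable intervals within the
    observation window, suggesting the learner is concentrating.

    blink_times_s: timestamps (seconds) of blinks detected within a
    predetermined time window.
    """
    if len(blink_times_s) < 3:
        return False  # too few blinks to judge interval stability
    intervals = [b - a for a, b in zip(blink_times_s, blink_times_s[1:])]
    # Coefficient of variation: small values mean regular blinking.
    cv = statistics.stdev(intervals) / statistics.mean(intervals)
    return cv <= max_cv
```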
  • The state detection unit 4 detects at least one of the learner's line of sight, pupil state, number of blinks, eyebrow movement, eyelid movement, cheek movement, nose movement, lip movement, and jaw movement, but it is preferable to combine them. By combining detection methods in this way, the state detection unit 4 can appropriately evaluate the learner's degree of concentration while the learner is viewing a given object.
  • Detectable states of each facial part include eyebrow movement (such as lifting the inner eyebrow or raising the outer eyebrow), eyelid movement (such as raising the upper eyelid or tensing the eyelids), nose movement (such as wrinkling the nose), lip movement (such as lifting the upper lip or pursing the lips), cheek movement (such as lifting the cheeks), and jaw movement (such as lowering the jaw).
  • The states of multiple facial parts may also be combined.
  • FIG. 2 is a diagram showing an example of a screen displayed by the display unit 7 of the information processing device 100. The visual targets include at least one of a problem statement and a question text.
  • The visual targets also include at least one specific part within the problem statement or question text; here, the specific part is an underlined portion added to the problem statement or question text.
  • In this example, the visual targets viewed by the learner are the problem statement, the question text, the underlined portion, and the options.
  • The display unit 7 includes, for example, a problem statement display area 10, a question display area 11, and an option display area 12. The problem statement display area 10 further contains an underlined area 13 in which an underline is displayed.
  • The visual target identification unit 2 first acquires the on-screen coordinates of these areas from the image acquisition unit 8 described later.
  • The visual target identification unit 2 then collates the acquired coordinates of each area with the line-of-sight information acquired from the state detection unit 4 and identifies which area contains the coordinates of the learner's gaze position. For example, as shown in FIG. 2, when the coordinates of the learner's gaze position 14 are identified as falling within the underlined area 13 of the problem statement display area 10, the visual target identification unit 2 starts counting the time during which the gaze position remains within the coordinates of the underlined area 13.
  • In this way, the visual target identification unit 2 identifies which visual target the learner is viewing by collating the coordinates of the learner's gaze position with the coordinates of each area of the display unit 7. Furthermore, by continuing this identification while one problem is displayed on the screen, it can identify the combination of visual targets the learner viewed for that problem, as sketched below.
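The following sketch shows one way to implement this collation and build the per-problem permutation; the region names and data layout are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str  # e.g. "problem", "question", "underline", "options"
    x: int
    y: int
    w: int
    h: int

    def contains(self, px, py):
        return (self.x <= px < self.x + self.w
                and self.y <= py < self.y + self.h)

def viewed_permutation(gaze_samples, regions):
    """Collate gaze coordinates against display regions and return the
    permutation of visual targets viewed for one problem, collapsing
    consecutive samples that fall in the same region."""
    permutation = []
    for px, py in gaze_samples:
        for region in regions:
            if region.contains(px, py):
                if not permutation or permutation[-1] != region.name:
                    permutation.append(region.name)
                break
    return permutation
```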
  • The visual target identification unit 2 transmits information about the identified visual targets to the instruction determination unit 3.
  • The instruction determination unit 3 determines the necessity of instruction, or the instruction necessity level, for the learner according to the combination of visual targets viewed by the learner, as identified by the visual target identification unit 2.
  • When answering a problem like the one shown in FIG. 2, a learner may read the problem statement and then the question text before answering, or may read the question text first and then the problem statement. There are thus various learning processes by which a learner arrives at an answer. A learner may also recognize an underlined portion of the problem statement or question text as important and read it before reading the whole text. In multiple-choice problems answered by selecting from options, reading from the options first is also conceivable, as is selecting an option without reading the problem statement at all.
  • Since learning processes vary in this way, the need for instruction is, for example, high for an undesirable learning process and low for a desirable one, so the necessity of instruction and the instruction method must be judged according to the learning process.
  • The information processing device 100 visualizes the learner's learning process by identifying the visual targets the learner views, and determines the necessity of instruction, or the instruction necessity level, according to the combination of those visual targets. By determining instruction necessity in this way, learners who need instruction can be extracted by the system to support their learning.
  • The instruction determination unit 3 determines the necessity of instruction, or the instruction necessity level, for the learner according to the combination of visual targets identified by the visual target identification unit 2. As one example, the instruction determination unit 3 refers to reference data stored in advance in the information processing device 100; alternatively, it may make the determination according to a predetermined rule.
  • FIG. 3 is a diagram showing an example of the reference data acquired by the instruction determination unit 3. The reference data is stored in the storage unit 6. In the reference data, "subject" is the subject the learner studies, "level" is the instruction necessity level, "personality" is the learner's personality group, and "permutation" is the order of the combination of viewed visual targets.
  • The reference data shown in FIG. 3 includes data for each subject, although the reference data need not include the subject item.
  • The instruction necessity level indicates the degree to which the learner needs instruction according to the learner's learning process. For example, the level is expressed in five stages, with level 1 being the lowest need for instruction and level 5 the highest. The accumulated determination results of the instruction determination unit 3 may be fed back to update the instruction necessity level associated with each permutation. Instead of the "instruction necessity level", the reference data may include an "instruction necessity" item that simply indicates whether instruction is needed, without dividing the need into levels.
  • The "personality" item represents groups into which learners are divided according to the characteristics of their personalities.
  • The instruction determination unit 3 may further refer to the learner's learner information when determining the necessity or level of instruction. Desirable and undesirable learning processes may differ depending on the learner's personality, so the reference data shown in FIG. 3 includes data for each personality group. For example, learners in group A (careful personalities) and learners in group B (easily distracted personalities) may have different degrees of instructional need for the same learning process.
  • In the example of FIG. 3, a learner with personality A who studies in the order problem statement, question text, underlined portion, options is at instruction necessity level 1 and has little need for instruction, whereas a learner with personality B following the same process is at level 3 and has a greater need. Thus, even for the same learning process, the instruction necessity level differs according to the learner's personality. If the learner's personality is not considered in the determination, or does not affect the instruction necessity level, the reference data need not include the personality item.
  • "Permutation" is data in which combinations of visual targets are arranged so that differences in viewing order are distinguished.
  • Instead of the "permutation", the reference data may include a "combination" item that represents a combination of visual targets without distinguishing the viewing order.
  • The "permutation" may consist of all four visual targets, or some visual targets may be omitted so that the data includes only part of them.
  • For example, the "permutation" may omit the underlined portion, as when the learner views only the problem statement, the question text, and the options without looking at the underlined portion.
  • The "permutation" may also contain duplicated visual targets.
  • For example, the "permutation" may duplicate the problem statement, as when the learner looks at the problem statement, then the question text, and then the problem statement again.
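A minimal sketch of the reference-data lookup, keyed by personality group and permutation; the entries echo the examples discussed around FIG. 3, but the table contents and the fallback level are assumptions:

```python
# Hypothetical reference data: (personality, permutation) -> level.
REFERENCE_DATA = {
    ("A", ("problem", "question", "underline", "options")): 1,
    ("B", ("problem", "question", "underline", "options")): 3,
    ("A", ("underline", "options", "question", "problem")): 5,
}

def lookup_necessity_level(personality, permutation, default=3):
    """Return the instruction necessity level for a learner's
    personality group and observed permutation of visual targets."""
    return REFERENCE_DATA.get((personality, tuple(permutation)), default)
```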
  • FIG. 4 is a diagram showing an example of the instruction necessity data output by the instruction determination unit 3. In this data, "subject" is the subject the learner studied, "problem" is the problem the learner worked on, "ID" is information identifying the learner, "personality" is the learner's personality group, "permutation" is the permutation of visual targets identified by the visual target identification unit 2, and "instruction necessity level" is the level determined by the instruction determination unit 3.
  • The instruction determination unit 3 determines the instruction necessity level of each learner for each problem.
  • The instruction necessity data shown in FIG. 4 shows the results of determining the instruction necessity level for each of learners 0001, 0002, and 0003 on problem 0001.
  • The "personality" and "permutation" items of the instruction necessity data shown in FIG. 4 are linked to the "personality" and "permutation" items of the reference data shown in FIG. 3. That is, the instruction determination unit 3 identifies the personality (A) of learner 0001 from the learner information acquired from the learner information acquisition unit 5, and identifies, from the information about visual targets acquired from the visual target identification unit 2, the permutation followed when learner 0001 worked on problem 0001 (underlined portion, options, question text, problem statement). It then refers to the reference data acquired from the storage unit 6, extracts the entry associated with the identified personality and permutation, and determines the instruction necessity level to be 5.
  • Similarly, the instruction determination unit 3 identifies the personality (B) and the permutation (problem statement, question text, underlined portion, options) for learner 0002, refers to the reference data, and determines the instruction necessity level to be 2.
  • Likewise, it identifies the personality (A) and the permutation (problem statement, question text, underlined portion, options) for learner 0003, refers to the reference data, and determines the instruction necessity level to be 1.
  • In this way, the instruction determination unit 3 refers to the reference data and determines the necessity of instruction, or the instruction necessity level, according to the order in which the visual targets were viewed.
  • The predetermined rule mentioned above is, for example, a point-deduction or point-addition rule, such as -10 points if the underlined portion is viewed first and -5 points if the question text is viewed first. The rule also predetermines how scores map to levels: for example, a score of -10 or lower corresponds to instruction necessity level 5, the highest need for instruction, and a score of 0 corresponds to level 1, the lowest.
  • When using such a rule, the instruction determination unit 3 applies the predetermined rule acquired from the storage unit 6 to the permutation of visual targets identified by the visual target identification unit 2, scores it, and determines the instruction necessity level from the score. For example, the instruction determination unit 3 scores learner 0001 at -10 points because the underlined portion was viewed first, and determines the instruction necessity level to be 5.
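A sketch of such a point-deduction rule; the mapping for intermediate scores is an assumption, since the text only fixes the endpoints (0 points at level 1, -10 points or lower at level 5):

```python
def level_from_rule(permutation):
    """Score a permutation by the point-deduction rule described above
    and map the score to an instruction necessity level."""
    score = 0
    first = permutation[0] if permutation else None
    if first == "underline":
        score -= 10  # underlined portion viewed first
    elif first == "question":
        score -= 5   # question text viewed first
    if score <= -10:
        return 5  # highest need for instruction
    if score == 0:
        return 1  # lowest need for instruction
    return 3      # assumed mapping for intermediate scores
```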
  • The instruction determination unit 3 may take further items into account when determining the necessity or level of instruction, for example by performing processing such as raising the instruction necessity level.
  • The storage unit 6 stores a program for operating the control unit 9, the reference data, and various other data referred to by the instruction determination unit 3.
  • The learner information acquisition unit 5 acquires, for example, the learner information of a specific learner registered in advance in a database or the like, and transmits it to the instruction determination unit 3.
  • The learner information includes attribute information indicating the learner's attributes and learner identification information for distinguishing the target learner from other learners.
  • The learner's attribute information is, for example, the learner's age, sex, and personality (careful, easily distracted, and so on).
  • The learner identification information is, for example, the learner's ID or email address.
  • Because the learner information includes the learner identification information, the target learner can be distinguished from other learners, and the determined instruction necessity or instruction necessity level can be linked to the learner information.
  • The display unit 7 displays the visual targets viewed by the learner. The display unit 7 may also display the instruction necessity or instruction necessity level determined by the instruction determination unit 3.
  • Although the configuration described here has the display unit 7 built into the information processing device 100, the display unit may instead be configured separately from the information processing device 100.
  • Display data including an object that visualizes the instruction necessity data may also be displayed; a graph is one example of such display data.
  • The display data may indicate the instruction necessity level of one learner, or may additionally display the average instruction necessity level of multiple learners. This allows a learner's instruction necessity level to be grasped at a glance and thus visualized.
  • The display unit 7 may display the instruction necessity or instruction necessity level determined by the instruction determination unit 3 together with suggestion information based on it.
  • Suggestion information is, for example, information prompting instruction, or information proposing an instruction method, for a problem with a high instruction necessity level.
  • The suggestion information may be a message such as "Instruction on relative pronouns in English grammar is needed." or "Please explain pages 8 to 10 of the textbook again."
  • The information processing device 100 may further include an image acquisition unit 8 that acquires a captured image including the learner's field of view from an imaging device such as a camera.
  • The imaging device is not particularly limited as long as it can detect the learner's state; examples include a camera mounted on a glasses-type wearable device and a camera mounted on a head-mounted device.
  • These devices may be of the transmissive or non-transmissive type.
  • In the non-transmissive type, for example, the camera (imaging device) and the display (display unit) the learner is viewing may be provided in the same device.
  • The captured image including the learner's field of view is, for example, an image of the scene in front of the learner, the display (display unit), and so on.
  • The visual target identification unit 2 may identify the visual targets viewed by the learner by referring to the captured image acquired by the image acquisition unit 8 and the detection result of the state detection unit 4. Specifically, the visual target identification unit 2 collates the coordinates of regions in the captured image with the learner's line-of-sight information detected by the state detection unit 4, and determines which coordinates in the captured image the learner's gaze position corresponds to.
  • The visual target identification unit 2 may also be configured to recognize objects contained in the captured image acquired by the image acquisition unit 8 using, for example, machine learning.
  • The control unit 9 centrally controls each unit of the information processing device 100 and includes the visual target identification unit 2, the instruction determination unit 3, and the state detection unit 4.
  • Each function of the control unit 9, and indeed every function of the information processing device 100, may be realized by a CPU executing a program stored in the storage unit 6 or the like.
  • When use of the information processing device 100 begins, processing starts (step S20) and proceeds to step S21 (hereinafter "step" is omitted).
  • In S21, the learner information acquisition unit 5 acquires the learner information. Details of this processing are as described in the "Learner information acquisition unit" section. The learner information acquisition unit 5 transmits the learner information to the instruction determination unit 3, and the process proceeds to S22.
  • In S22, the face information acquisition unit 1 acquires the learner's face image from the imaging unit and acquires the learner's face information from the face image (face information acquisition step). Details of this processing are as described in the "Face information acquisition unit" section.
  • The face information acquisition unit 1 transmits the acquired face information to the state detection unit 4, and the process proceeds to S23.
  • In S23, the state detection unit 4 detects the learner's line of sight, blinks, pupil state, and the state of each facial part based on the face information extracted by the face information acquisition unit 1. Details of this processing are as described in the "State detection unit" section. The detection result is transmitted to the visual target identification unit 2, and the process proceeds to S24.
  • In S24, the visual target identification unit 2 identifies the visual targets viewed by the learner based on the detection result acquired from the state detection unit 4 (visual target identification step). Details of this processing are as described in the "Visual target identification unit" section. The visual target identification unit 2 transmits information about the identified visual targets to the instruction determination unit 3, and the process proceeds to S25.
  • In S25, the instruction determination unit 3 acquires the reference data from the storage unit 6, and the process proceeds to S26.
  • In S26, the instruction determination unit 3 refers to the information about the visual targets acquired from the visual target identification unit 2 and the reference data acquired from the storage unit 6 to determine the necessity of instruction, or the instruction necessity level, for the learner (instruction determination step). Details of this processing are as described in the "Instruction determination unit" section.
  • The control unit 9 may store the instruction necessity or instruction necessity level determined by the instruction determination unit 3 in the storage unit 6.
  • The instruction determination unit 3 transmits the determined instruction necessity or instruction necessity level to the display unit 7, and the process proceeds to S27.
  • In S27, the display unit 7 displays the instruction necessity or instruction necessity level acquired from the instruction determination unit 3; the process then proceeds to S28 and ends. The sketch below strings these steps together.
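A sketch of one pass through steps S21 to S27; the `device` object and its attribute names are an assumed interface bundling the units described above, not an API defined by the patent:

```python
def run_determination_flow(device):
    """One pass through steps S21-S27 of FIG. 5 (illustrative)."""
    learner = device.learner_info.acquire()              # S21
    face = device.face_info.acquire()                    # S22
    state = device.state_detector.detect(face)           # S23
    targets = device.target_identifier.identify(state)   # S24
    reference = device.storage.load_reference_data()     # S25
    level = device.instruction_judge.judge(              # S26
        targets, reference, learner)
    device.display.show(level)                           # S27
    return level
```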
  • FIG. 6 is a block diagram showing the main configuration of the instruction determination unit 3 of the information processing device 100 according to the embodiment of the present invention, and FIG. 7 is a diagram showing an example of the target information acquired by the instruction determination unit 3 of the information processing device 100 according to the embodiment of the present invention.
  • As shown in FIG. 6, the instruction determination unit 3 includes a feature conversion unit 31, an estimation result calculation unit 32, and an instruction method classification unit 33.
  • The feature conversion unit 31 calculates a feature relating to the learner by referring to the order in which the visual targets were viewed and the learner's learner information.
  • For example, the feature conversion unit 31 calculates the distribution of the learner's attention as a feature.
  • The order in which the visual targets were viewed corresponds to the "permutation" shown in FIGS. 3 and 4, and the learner's learner information includes the "personality" shown in FIGS. 3 and 4.
  • The feature calculated by the feature conversion unit 31 corresponds to the "level" shown in FIG. 3.
  • The feature conversion unit 31 may also calculate the feature by referring to the order in which the visual targets were viewed together with sensing data that includes the learner's learner information.
  • The sensing data includes the learner's line of sight, personality, educational history, and labeling (evaluation) of the learner by the instructor.
  • The sensing data is obtained from, for example, video recorded during a practice test and coordinate data of the learner's gaze position.
  • The feature calculated by the feature conversion unit 31 represents the learner's degree of understanding and motivation.
  • When calculating the feature, the feature conversion unit 31 may, for example, extract and record points at which the learner shows a low degree of understanding.
  • The feature conversion unit 31 calculates, as a feature, the distribution of the learner's attention over the visual targets of a problem, based on how long the learner's gaze dwelt on each visual target. For example, if the gaze dwelt on the options longer than on the problem statement, the feature conversion unit 31 judges that the learner's understanding of the problem is low and calculates a feature representing that judgment.
  • The feature may be, for example, a five-level evaluation in which level 1 indicates a high degree of understanding and level 5 a low degree of understanding, as sketched below.
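A sketch of this feature conversion: dwell times become an attention distribution, and the options-versus-problem comparison yields the understanding level. The two-way mapping to levels 1 and 5 is a simplification of the five-level evaluation, and the key names are assumptions:

```python
def attention_feature(dwell_times_s):
    """Convert per-target gaze dwell times into an attention
    distribution and an understanding level.

    dwell_times_s: e.g. {"problem": 12.0, "question": 8.0,
                         "underline": 3.0, "options": 30.0}
    """
    total = sum(dwell_times_s.values())
    distribution = {k: v / total for k, v in dwell_times_s.items()}
    # Heuristic from the text: dwelling on the options longer than on
    # the problem statement suggests a low degree of understanding.
    if distribution.get("options", 0.0) > distribution.get("problem", 0.0):
        return distribution, 5  # low understanding
    return distribution, 1      # high understanding
```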
  • The order in which the visual targets were viewed is identified by the visual target identification unit 2 and sent to the feature conversion unit 31, while the learner information is sent from the learner information acquisition unit 5 to the feature conversion unit 31.
  • The feature conversion unit 31 may additionally acquire data stored in the storage unit 6 and use it to calculate the feature.
  • The feature conversion unit 31 may send the calculated feature and the sensing data to the estimation result calculation unit 32 and also to the storage unit 6.
  • The estimation result calculation unit 32 calculates an estimation result by referring to the feature calculated by the feature conversion unit 31, and may also refer to the sensing data together with the feature.
  • The estimation result calculated by the estimation result calculation unit 32 is a prediction of, for example, the learner's score on the next test, the learner's motivation in future classes, the probability of dropping out, or the number of students who will sit an examination.
  • For example, if the feature is level 1, the estimation result calculation unit 32 estimates that the learner's score on the next test will be good; if the feature is level 5, it estimates that the score will be poor.
  • The estimation result calculation unit 32 may send the estimation result to the instruction method classification unit 33 and also to the storage unit 6.
  • The instruction method classification unit 33 refers to the estimation result calculated by the estimation result calculation unit 32 and to target information, and determines the necessity of instruction, or the instruction necessity level, for the learner.
  • The instruction method classification unit 33 may also refer to the sensing data together with the estimation result and the target information when making this determination.
  • The instruction method classification unit 33 may send the determination result to the display unit 7 and also to the storage unit 6. Since the transmission of information from the instruction determination unit 3 to the display unit 7 is already shown in FIG. 1, the display unit 7 is omitted from FIG. 6.
  • The instruction necessity or instruction necessity level determined by the instruction method classification unit 33 corresponds to the "instruction necessity level" shown in FIG. 4.
  • The instruction necessity level can represent determinations such as requiring stricter instruction, changing the instructor, or intensively teaching a specific subject.
  • The instruction necessity level is adjusted according to the target information.
  • The target information is included in target information data stored in advance in the storage unit 6; an example of the target information data is shown in FIG. 7.
  • The target information data includes the target score for each subject for each goal set by the learner. For example, if goal 1 is "passing XX university", the target score in Japanese required to achieve it is 100 points, the target in mathematics is 200 points, and the target in English is 150 points.
  • The instruction method classification unit 33 refers to the learner's goal included in the sensing data and to the learner's estimation result, and may determine that the instruction necessity level should be raised when the difference between the estimated score and the target score is large, and lowered when the difference is small.
  • For example, if the estimated mathematics score falls far short of the target, the instruction method classification unit 33 may decide on an instruction method that intensively improves mathematics results and on the selection of a teacher who excels at teaching mathematics.
  • Conversely, if the difference between the estimated English score and the target score is small, it may judge that it is more efficient to give up on raising the Japanese and mathematics scores and concentrate on improving the English score, deciding on an instruction method that intensively improves English. A sketch of this goal-gap comparison follows.
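A sketch of that goal-gap comparison; the gap threshold and the two-level output are assumptions standing in for the finer-grained determination described in the text:

```python
def levels_from_goal_gap(estimated_scores, target_scores, big_gap=50):
    """Raise the instruction necessity level for subjects whose
    estimated score falls far short of the goal's target score.

    estimated_scores / target_scores: e.g.
        {"Japanese": 80, "Math": 120, "English": 140}
    """
    levels = {}
    for subject, target in target_scores.items():
        gap = target - estimated_scores.get(subject, 0)
        levels[subject] = 5 if gap >= big_gap else 1
    return levels
```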
  • The control blocks of the information processing device 100 may be realized by a logic circuit (hardware) formed on an integrated circuit (IC chip) or the like, or may be realized by software.
  • In the latter case, the information processing device 100 includes a computer that executes the instructions of a program, which is software realizing each function.
  • The computer includes, for example, at least one processor and a computer-readable recording medium storing the program. In the computer, the processor reads the program from the recording medium and executes it, thereby achieving the object of the present invention.
  • As the processor, for example, a CPU (Central Processing Unit) can be used.
  • As the recording medium, a "non-transitory tangible medium" such as a ROM (Read Only Memory), a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used. A RAM (Random Access Memory) into which the program is loaded may also be provided.
  • The program may be supplied to the computer via any transmission medium capable of carrying it (a communication network, a broadcast wave, etc.).
  • One aspect of the present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
  • FIG. 8 is a block diagram showing the components of the information processing device according to another embodiment. As shown in FIG. 8, the information processing device 101 differs from the information processing device 100 of the first embodiment in that its control unit 90 includes a visualization information generation unit 91.
  • The information processing device 101 includes the face information acquisition unit 1 that acquires face information including information on at least part of a learner's face, the visual target identification unit 2 that refers to the face information to identify the multiple visual targets the learner is viewing, and the visualization information generation unit 91 that generates visualized information indicating the visual targets identified by the visual target identification unit 2.
  • The visualization information generation unit 91 acquires, in time series, the results of the visual target identification unit 2 identifying the visual targets, and sends the generated visualization information to the display unit 7 and the storage unit 6.
  • The visualized information generated by the visualization information generation unit 91 may include information indicating which visual target was being viewed at each time, and may include information about how long each visual target was viewed within a predetermined time.
  • FIG. 9 is a diagram showing an example of the visualized information generated by the visualization information generation unit 91.
  • The visualized information is, for example, as shown in FIG. 9(a), a graph with time on the horizontal axis showing which question in a test (questions Q1 to Q5) was being viewed at each time.
  • In this example, the graph shows that question Q1 was viewed from time t1 to t2, Q2 from t3 to t4, Q3 from t5 to t6, Q4 from t7 to t8, and Q5 from t9 to t10.
  • Alternatively, the visualized information is, for example, as shown in FIG. 9(b), information indicating how long each question in the test was viewed during the total answering time; FIG. 9(b) shows a pie chart of the proportion of the total answering time spent viewing each question.
  • The visualized information may also be time-series information indicating which visual target (problem statement, question text, underlined portion, or options) was being viewed at each time while solving one problem, or information indicating the proportion of the total answering time for one problem spent viewing the problem statement, question text, underlined portion, and options. A sketch rendering both views follows.
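A sketch of both visualizations of FIG. 9 using matplotlib; the data layout and all styling choices are assumptions made for illustration:

```python
import matplotlib.pyplot as plt

def plot_visualized_info(view_spans, dwell_times_s):
    """Draw (a) a timeline of which question was viewed when and
    (b) a pie chart of dwell-time ratios over the total answer time.

    view_spans: e.g. [("Q1", 0, 20), ("Q2", 25, 60)] as (question,
    start second, end second); dwell_times_s: e.g. {"Q1": 20.0, ...}.
    """
    fig, (ax_timeline, ax_pie) = plt.subplots(1, 2, figsize=(10, 4))
    for question, start, end in view_spans:
        ax_timeline.barh(question, end - start, left=start)
    ax_timeline.set_xlabel("time (s)")
    ax_timeline.set_title("(a) question viewed over time")
    ax_pie.pie(list(dwell_times_s.values()),
               labels=list(dwell_times_s.keys()), autopct="%1.0f%%")
    ax_pie.set_title("(b) share of total answer time")
    fig.tight_layout()
    plt.show()
```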
  • An information processing device according to an aspect of the present invention includes: a face information acquisition unit that acquires face information including information on at least part of a learner's face; a visual target identification unit that refers to the face information to identify the multiple visual targets the learner is viewing; and an instruction determination unit that determines the necessity of instruction, or the instruction necessity level, for the learner according to the combination of visual targets identified by the visual target identification unit.
  • The instruction determination unit may determine the instruction necessity or the instruction necessity level for the learner according to the order in which the visual targets were viewed.
  • The visual targets may include at least one of a problem statement and a question text.
  • The visual targets may include at least one specific part within the problem statement or question text.
  • The instruction determination unit may further refer to the learner's learner information when determining the instruction necessity or instruction necessity level.
  • The instruction determination unit may include: a feature conversion unit that calculates a feature relating to the learner by referring to the order in which the visual targets were viewed and the learner's learner information; an estimation result calculation unit that calculates an estimation result by referring to the feature; and an instruction method classification unit that determines the necessity of instruction, or the instruction necessity level, for the learner by referring to the estimation result and the target information.
  • An information processing method according to an aspect of the present invention includes: a face information acquisition step of acquiring face information including information on at least part of a learner's face; a visual target identification step of referring to the face information to identify the multiple visual targets the learner is viewing; and an instruction determination step of determining the necessity of instruction, or the instruction necessity level, for the learner according to the combination of identified visual targets.
  • An information processing program according to an aspect of the present invention is a program for causing a computer to function as any of the above information processing devices, causing the computer to function as the face information acquisition unit, the visual target identification unit, and the instruction determination unit.
  • An information processing device according to another aspect of the present invention includes: a face information acquisition unit that acquires face information including information on at least part of a learner's face; a visual target identification unit that refers to the face information to identify the multiple visual targets the learner is viewing; and a visualization information generation unit that generates visualized information indicating the visual targets identified by the visual target identification unit.
  • The visualized information may include information indicating which visual target was being viewed at each time.
  • The visualized information may include information about the time during which each visual target was viewed within a predetermined time.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Tourism & Hospitality (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention realizes a learning support system that determines whether a learner needs instruction. An information processing device (100) comprises: a face information acquisition unit (1) for acquiring the face information of a learner; a visual target identification unit (2) for identifying the visual targets viewed by the learner by referring to the face information; and an instruction determination unit (3) for determining whether the learner requires instruction, or the learner's instruction necessity level, in accordance with the combination of viewed targets.

Description

情報処理装置及び情報処理方法Information processing apparatus and information processing method
 本発明は、情報処理装置及び情報処理方法に関する。 The present invention relates to an information processing device and an information processing method.
 教育機関等において利用される、学習者の学習を支援するシステムが従来技術として知られている。例えば、特許文献1には、学習者の読み返し頻度及び視線停留をモニタリングした結果を他の学習者と比較して、その比較結果を学習者及び指導者に提示する技術が記載されている。 A system that is used in educational institutions to support learners' learning is known as conventional technology. For example, Patent Literature 1 describes a technique of comparing a result of monitoring a readback frequency and a gaze stop of a learner with other learners and presenting the comparison result to the learner and the instructor.
日本国公開特許公報「特開2005-338173号公報(2005年12月8日公開)」Japanese Patent Laid-Open Publication "Japanese Patent Application Laid-Open No. 2005-338173 (published on December 8, 2005)"
 しかしながら、特許文献1に記載された技術は、学習者の理解度を評価するものであり、学習者に指導が必要か否かは、評価結果に基づいて学習者又は指導者自身が検討する必要がある。 However, the technique described in Patent Document 1 is for evaluating the degree of understanding of the learner, and whether or not the learner needs the instruction needs to be examined by the learner or the instructor himself based on the evaluation result. There is.
 本発明の一態様は、学習者の学習プロセスを可視化して、当該学習者に対する指導要否を判定することで、指導が必要な学習者をシステムにより抽出する学習支援システムを実現することを目的とする。 An aspect of the present invention is to realize a learning support system that visualizes a learning process of a learner and determines necessity of instruction for the learner to extract a learner who needs instruction by the system. And
 上記の課題を解決するために、本発明の一態様に係る情報処理装置は、学習者の顔の少なくとも一部の情報を含む顔情報を取得する顔情報取得部と、前記顔情報を参照して、前記学習者が視認している複数の視認対象物を特定する視認対象特定部と、前記視認対象特定部が特定した視認対象物の組み合わせに応じて、前記学習者に対する指導要否又は指導要否レベルを判定する指導判定部とを備えていることを特徴とする。 In order to solve the above problems, an information processing apparatus according to an aspect of the present invention refers to a face information acquisition unit that acquires face information including information of at least a part of a learner's face, and the face information. , The necessity or guidance of the instruction to the learner according to the combination of the visual identification target specifying unit that identifies a plurality of visual recognition target objects visually recognized by the learner and the visual identification target object specified by the visual recognition target identification unit. It is characterized in that it is provided with a guidance judging unit for judging a necessity level.
 また、前記課題を解決するために、本発明の一態様に係る情報処理方法は、学習者の顔の少なくとも一部の情報を含む顔情報を取得する顔情報取得ステップと、前記顔情報を参照して、前記学習者が視認している複数の視認対象物を特定する視認対象特定ステップと、前記視認対象特定ステップにおいて特定した視認対象物の組み合わせに応じて、前記学習者に対する指導要否又は指導要否レベルを判定する指導判定ステップとを含むことを特徴とする。 In order to solve the above problems, the information processing method according to one aspect of the present invention refers to a face information acquisition step of acquiring face information including information of at least a part of a learner's face, and the face information. Then, depending on the combination of the visual identification target identifying step of identifying the plurality of visual identification objects visually recognized by the learner, and the combination of the visual identification objects identified in the visual identification target identifying step, it is necessary or unnecessary to instruct the learner. And a guidance determining step of determining a guidance necessity level.
 また、前記課題を解決するために、本発明の一態様に係る情報処理プログラムは、前記いずれかの情報処理装置としてコンピュータを機能させるための情報処理プログラムであって、前記顔情報取得部、前記視認対象特定部、及び前記指導判定部としてコンピュータを機能させる。 Further, in order to solve the above problems, an information processing program according to an aspect of the present invention is an information processing program for causing a computer to function as any one of the information processing devices, the face information acquisition unit, A computer is caused to function as the visually recognizable object specifying unit and the guidance judging unit.
 また、前記課題を解決するために、本発明の一態様に係る情報処理装置は、学習者の顔の少なくとも一部の情報を含む顔情報を取得する顔情報取得部と、前記顔情報を参照して、前記学習者が視認している複数の視認対象物を特定する視認対象特定部と、前記視認対象特定部が特定した視認対象物を示す可視化された情報を生成する可視化情報生成部とを備えていることを特徴とする。 Further, in order to solve the above problems, an information processing apparatus according to an aspect of the present invention refers to a face information acquisition unit that acquires face information including information of at least a part of a learner's face, and the face information. Then, the visual recognition target specifying unit that specifies a plurality of visual recognition target objects visually recognized by the learner, and a visualization information generation unit that generates visualized information indicating the visual recognition target object specified by the visual recognition target specifying unit. It is characterized by having.
 According to one aspect of the present invention, whether instruction is needed for a learner, or the instruction necessity level, can be determined according to the combination of visual targets the learner is viewing.
FIG. 1 is a block diagram showing the components of an information processing apparatus according to an embodiment of the present invention.
FIG. 2 is a diagram showing an example of a screen displayed by the display unit of an information processing apparatus according to an embodiment of the present invention.
FIG. 3 is a diagram showing an example of the reference data acquired by the instruction determination unit of an information processing apparatus according to an embodiment of the present invention.
FIG. 4 is a diagram showing an example of the instruction necessity data output by the instruction determination unit of an information processing apparatus according to an embodiment of the present invention.
FIG. 5 is a flowchart showing an example of the processing flow of an information processing apparatus according to an embodiment of the present invention.
FIG. 6 is a block diagram showing the main configuration of the instruction determination unit of an information processing apparatus according to an embodiment of the present invention.
FIG. 7 is a diagram showing an example of the target information acquired by the instruction determination unit of an information processing apparatus according to an embodiment of the present invention.
FIG. 8 is a block diagram showing the components of an information processing apparatus according to another embodiment of the present invention.
FIG. 9 is a diagram showing an example of the information generated by the visualization information generation unit of an information processing apparatus according to another embodiment of the present invention.
 [First Embodiment]
 [Information processing apparatus]
 Hereinafter, an embodiment of the present invention will be described in detail. The information processing apparatus 100 is a learning support system that visualizes a learner's learning process and determines whether instruction is needed for that learner, thereby allowing the system to extract learners who need instruction. FIG. 1 is a block diagram showing the components of the information processing apparatus 100 according to an embodiment of the present invention. As shown in FIG. 1, the information processing apparatus 100 includes a face information acquisition unit 1, a visual target identification unit 2, and an instruction determination unit 3. The information processing apparatus 100 may further include a state detection unit 4, a learner information acquisition unit 5, a storage unit 6, a display unit 7, and an image acquisition unit 8. Here, the visual target identification unit 2, the instruction determination unit 3, and the state detection unit 4 are realized as components of the control unit 9 of the information processing apparatus 100.
 With this configuration, the information processing apparatus 100 can visualize the learner's learning process and determine whether instruction is needed for that learner.
 As one example, the information processing apparatus 100 is implemented in a terminal device connectable to a local or global network (for example, a smartphone, a tablet terminal, a personal computer, or a television receiver).
 (Face information acquisition unit)
 The face information acquisition unit 1 acquires the learner's face information from an image obtained from an imaging unit such as a camera. The learner's face information is information indicating facial feature quantities. Facial feature quantities refer to, for example, position information indicating the position, shape information indicating the shape, and size information indicating the size of each facial part (for example, the eyes, nose, mouth, and eyebrows). Eye information is particularly useful because the learner's degree of interest in the object being gazed at can be evaluated from it; examples of eye information include the end points of the inner and outer corners of the eyes and the edges of the iris and pupil. The face information acquisition unit 1 may also apply correction processing such as noise reduction and edge enhancement to the image obtained from the imaging unit as appropriate. The face information acquisition unit 1 transmits the extracted face information to the state detection unit 4.
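 Purely as an illustration (this sketch is not part of the disclosed embodiment), the facial feature extraction described above could look like the following in Python, assuming OpenCV's bundled Haar cascades as the detector; the function name acquire_face_info and the data layout are hypothetical.

```python
import cv2

# Haar cascades shipped with OpenCV; any face/landmark detector would do.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def acquire_face_info(frame):
    """Extract position/size information for the face and eyes in a frame,
    roughly corresponding to the facial feature quantities in the text."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (3, 3), 0)  # simple noise reduction
    face_info = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        face_info.append({
            "face": (x, y, w, h),              # position and size of the face
            "eyes": [(x + ex, y + ey, ew, eh)  # eye regions in frame coords
                     for (ex, ey, ew, eh) in eyes],
        })
    return face_info  # would be sent on to the state detection unit 4
```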
 (State detection unit)
 The state detection unit 4 detects the learner's state based on the face information extracted by the face information acquisition unit 1, and then transmits information on the detection result to the visual target identification unit 2. The learner's state detected by the state detection unit 4 is, for example, the state of each part of the learner's face, and is at least one of the learner's line of sight, pupil state, number of blinks, eyebrow movement, cheek movement, eyelid movement, lip movement, and jaw movement.
 As one example, the state detection unit 4 may refer to the facial feature quantities acquired by the face information acquisition unit 1, such as position information, shape information, and size information for each facial part (for example, the eyes, nose, mouth, cheeks, and eyebrows), and detect, as the learner's state, at least one of the learner's line of sight, pupil state, number of blinks, eyebrow movement, cheek movement, eyelid movement, lip movement, and jaw movement.
 By providing the state detection unit 4 in this way, the information processing apparatus 100 can suitably detect the learner's state.
 The method of detecting the line of sight is not particularly limited. One example is to provide the information processing apparatus 100 with a point light source (not shown) and capture corneal reflection images of the light from the point light source with the imaging unit for a predetermined time, thereby tracking where the learner's line of sight moves. The type of point light source is not particularly limited, and visible light and infrared light can be used; using an infrared LED, for example, allows the line of sight to be detected without causing the learner discomfort. In line-of-sight detection, if the line of sight does not move for a predetermined time or longer, the learner can be considered to be gazing at the same place.
 The method of detecting the pupil state is also not particularly limited; one example is to detect the circular pupil from an eye image using the Hough transform. In general, people tend to dilate their pupils when concentrating, so the learner's degree of concentration can be evaluated by detecting the pupil size. For example, if the pupil size is measured over a predetermined period, the learner is likely to be gazing at some object during the intervals within that period in which the pupil is dilated. A threshold may be set for the pupil size, with the pupil evaluated as "open" when its size is at or above the threshold and "closed" when below it.
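 As a hedged sketch of the Hough-transform approach just mentioned (not the disclosed implementation; the threshold value is an arbitrary placeholder), pupil detection and the open/closed evaluation could be written as:

```python
import cv2
import numpy as np

def pupil_state(eye_gray, radius_threshold=12):
    """Detect a circular pupil in a grayscale eye crop with the Hough
    transform and classify it as 'open' or 'closed' against a threshold."""
    blurred = cv2.medianBlur(eye_gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=60, param2=20, minRadius=3, maxRadius=40)
    if circles is None:
        return None  # no pupil found (eye closed or occluded)
    # Take the largest detected circle as the pupil.
    _, _, radius = max(np.round(circles[0]).astype(int), key=lambda c: c[2])
    return "open" if radius >= radius_threshold else "closed"
```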
 The method of detecting the number of blinks is not particularly limited either; one example is to irradiate the learner's eyes with infrared light and detect the difference in the amount of reflected infrared light between when the eyes are open and when they are closed. In general, people blink less frequently and at more stable intervals when concentrating, so the learner's degree of concentration can be evaluated by detecting the number of blinks. For example, if the number of blinks is measured over a predetermined period and the blinks occur at stable intervals within that period, the learner is likely to be gazing at some object.
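 The interval-stability idea could be formulated, for instance, as follows (an assumed formulation with an arbitrary threshold, not the patent's method):

```python
import statistics

def blinks_are_regular(blink_times, max_stdev=0.5):
    """Judge from blink timestamps (in seconds) whether blinking occurs at
    stable intervals, which the text associates with concentration."""
    if len(blink_times) < 3:
        return True  # too few blinks in the window to call irregular
    intervals = [b - a for a, b in zip(blink_times, blink_times[1:])]
    return statistics.stdev(intervals) <= max_stdev
```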
 The state detection unit 4 may detect at least one of the learner's line of sight, pupil state, number of blinks, eyebrow movement, eyelid movement, cheek movement, nose movement, lip movement, and jaw movement, but combining these is preferable. By combining detection methods in this way, the state detection unit 4 can suitably evaluate the learner's degree of concentration while viewing a given object.
 States other than those of the eyes include, for example, eyebrow movements such as raising the inner or outer ends of the eyebrows; eyelid movements such as raising the upper eyelid or tensing the eyelids; nose movements such as wrinkling the nose; lip movements such as raising the upper lip or pursing the lips; cheek movements such as raising the cheeks; and jaw movements such as lowering the jaw. The states of multiple facial parts may be combined as the learner's state.
 (Visual target identification unit)
 The visual target identification unit 2 identifies the visual target the learner is viewing based on the detection result obtained from the state detection unit 4. Specific processing of the visual target identification unit 2 will be described with reference to FIG. 2, which shows an example of a screen displayed by the display unit 7 of the information processing apparatus 100. The visual targets include at least one of a question sentence and a problem sentence, and also include specific parts within at least one of them; here, a specific part includes an underlined portion in the problem sentence or question sentence. In the example shown in FIG. 2, the visual targets the learner views are the problem sentence, the question sentence, the underlined portion, and the choices.
 As shown in FIG. 2, the display unit 7 includes, for example, a problem sentence display area 10, a question sentence display area 11, and a choice display area 12. The problem sentence display area 10 includes an underlined-portion area 13 in which an underline is displayed. The visual target identification unit 2 first acquires the on-screen coordinates of these areas from the image acquisition unit 8, described later.
 The visual target identification unit 2 collates the acquired coordinates of each area with the line-of-sight information obtained from the state detection unit 4, and identifies within which area's coordinates the learner's gaze position lies. For example, as shown in FIG. 2, when the coordinates of the learner's gaze position 14 are identified as lying within the coordinates of the underlined-portion area 13 of the problem sentence display area 10, the visual target identification unit 2 starts counting the time during which the gaze position remains within the coordinates of the underlined-portion area 13. In this way, by collating the coordinates of the learner's gaze position with the coordinates of each area of the display unit 7, the visual target identification unit 2 identifies which visual target the learner is viewing. Furthermore, by continuing to identify visual targets while a given problem is displayed on the screen, the visual target identification unit 2 can identify the combination of visual targets the learner viewed for that problem.
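 A minimal sketch of this coordinate collation, assuming hypothetical (left, top, width, height) rectangles for the areas in FIG. 2 and a stream of (x, y) gaze samples:

```python
# Hypothetical on-screen areas corresponding to FIG. 2.
REGIONS = {
    "problem_text": (40, 60, 600, 200),
    "underline": (40, 140, 300, 20),  # lies inside the problem text area
    "question_text": (40, 280, 600, 80),
    "choices": (40, 380, 600, 160),
}

def region_at(x, y):
    """Return the display area containing the gaze position; the underline
    is tested first because it is nested inside the problem text area."""
    for name in ("underline", "problem_text", "question_text", "choices"):
        left, top, w, h = REGIONS[name]
        if left <= x < left + w and top <= y < top + h:
            return name
    return None

def viewing_order(gaze_samples):
    """Collapse successive (x, y) gaze samples into the ordered sequence of
    viewed targets, i.e. the 'permutation' used later by the judging unit."""
    order = []
    for x, y in gaze_samples:
        name = region_at(x, y)
        if name and (not order or order[-1] != name):
            order.append(name)
    return order
```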
 In addition to the line-of-sight information, referring to the detection results for the pupil state, number of blinks, and movements of the eyebrows, eyelids, cheeks, nose, lips, and jaw, as described under (State detection unit), makes it possible to identify even more suitably which area the learner is viewing with concentration.
 The visual target identification unit 2 transmits information on the identified visual targets to the instruction determination unit 3.
 (Instruction determination unit)
 The instruction determination unit 3 determines whether instruction is needed for the learner, or the instruction necessity level, according to the combination of visual targets that the visual target identification unit 2 has identified as being viewed by the learner.
 When a learner answers a problem such as that shown in FIG. 2, there are various learning processes by which the learner arrives at an answer: reading the problem sentence and then the question sentence before answering, or first reading the question sentence and then the problem sentence. A learner may also recognize underlined portions of the problem or question sentence as important and read them before reading the sentence as a whole. In multiple-choice problems answered by selecting among choices, the learner may start by reading the choices, and it is also conceivable that a learner selects a choice without reading the problem sentence at all.
 Since learners' learning processes vary in this way, an instructor needs to judge the need for instruction and the teaching method according to the learning process: for example, an undesirable learning process indicates a high need for instruction, whereas a desirable one indicates a low need. The information processing apparatus 100 visualizes the learner's learning process by identifying the visual targets the learner views, and determines the instruction necessity or instruction necessity level for the learner according to the combination of visual targets. By determining the instruction necessity or its level according to this combination, the system can extract learners who need instruction and support their learning.
 The instruction determination unit 3 determines the instruction necessity or instruction necessity level for the learner according to the combination of visual targets identified by the visual target identification unit 2. As one example, the instruction determination unit 3 refers to reference data stored in advance in the information processing apparatus 100 to determine the instruction necessity or its level. The instruction determination unit 3 may also determine the instruction necessity or its level according to a predetermined rule.
 A mode in which the instruction determination unit 3 refers to reference data to determine the instruction necessity or instruction necessity level will now be described. FIG. 3 shows an example of the reference data acquired by the instruction determination unit 3; the reference data is stored in the storage unit 6. In FIG. 3, "subject" is the subject the learner studies, "level" is the learner's instruction necessity level, "personality" is the learner's personality, and "permutation" indicates the order of the combination of visual targets.
 The learning processes for which the need for instruction is high or low, and hence the instruction necessity levels, may differ from subject to subject, so the reference data shown in FIG. 3 includes data for each subject. If the instruction necessity level for a given learning process does not differ between subjects, the reference data need not include the subject item.
 The "instruction necessity level" indicates the degree to which the learner needs instruction, according to the learner's learning process. For example, the degree of need may be expressed in five levels, set so that level 1 indicates the lowest need for instruction and level 5 the highest. The accumulated determination results of the instruction determination unit 3 may be fed back to update the instruction necessity level associated with each permutation. Instead of the "instruction necessity level", the reference data may include an "instruction necessity" item that merely indicates whether instruction is needed or not, without dividing the necessity into levels.
 The "personality" item represents groups into which learners are divided according to their personality traits. The instruction determination unit 3 further refers to the learner's learner information to determine the instruction necessity or instruction necessity level for that learner. Which learning processes are desirable or undesirable may depend on the learner's personality, so the reference data shown in FIG. 3 includes data for each learner personality. For example, the degree of need for instruction for a given learning process may differ between learners belonging to group A, with methodical personalities, and learners belonging to group B, with easily distracted personalities.
 To describe this more concretely with reference to FIG. 3: when a learner studies by viewing the question sentence, the problem sentence, the underlined portion, and the choices in that order, a learner with personality A is at instruction necessity level 1 and has a low need for instruction, whereas a learner with personality B is at level 3 and has a higher need. Thus, even for the same learning process, the instruction necessity and its level differ according to the learner's personality. If the learner's personality is not considered in determining the instruction necessity level, or if the level is unaffected by the learner's personality, the reference data need not include the personality item.
 A "permutation" is data in which combinations of visual targets are arranged with distinctions drawn between different viewing orders. Instead of the "permutation", the reference data may include a "combination" item that represents the combination of visual targets without distinguishing the order in which they were viewed.
 When there are four visual targets (the question sentence, problem sentence, underlined portion, and choices), a "permutation" may consist of all four, or it may omit some of them and include only a subset: for example, a permutation with the underlined portion omitted, as when the learner views only the question sentence, problem sentence, and choices. A "permutation" may also contain repeated visual targets: for example, a permutation in which the problem sentence appears twice, as when the learner views the problem sentence, then the question sentence, and then the problem sentence once more.
 Next, the determination made by the instruction determination unit 3 with reference to the reference data will be described. FIG. 4 shows an example of the instruction necessity data output by the instruction determination unit 3. In FIG. 4, "subject" is the subject the learner studied, "problem" is the problem the learner worked on, "ID" is information for identifying the learner, "personality" is the learner's personality, "permutation" is the permutation of visual targets identified by the visual target identification unit 2, and "instruction necessity level" indicates the level determined by the instruction determination unit 3.
 The instruction determination unit 3 determines each learner's instruction necessity level for each problem. For example, the instruction necessity data shown in FIG. 4 shows the results of determining the instruction necessity level for each of learners 0001, 0002, and 0003 with respect to problem 0001.
 The "personality" and "permutation" items of the instruction necessity data shown in FIG. 4 are linked to the corresponding items of the reference data shown in FIG. 3. That is, the instruction determination unit 3 identifies the personality (A) of learner 0001 from the learner information obtained from the learner information acquisition unit 5, identifies from the information on visual targets obtained from the visual target identification unit 2 the permutation with which learner 0001 worked on problem 0001 (underlined portion - choices - problem sentence - question sentence), refers to the reference data obtained from the storage unit 6, extracts the data associated with the identified personality and permutation, and determines the instruction necessity level to be 5.
 Similarly, for learner 0002, the instruction determination unit 3 identifies the personality (B) and the permutation (problem sentence - question sentence - underlined portion - choices), refers to the reference data, and determines the instruction necessity level to be 2. For learner 0003, it identifies the personality (A) and the permutation (question sentence - problem sentence - underlined portion - choices), refers to the reference data, and determines the level to be 1.
 In this way, the instruction determination unit 3 refers to the reference data and determines the instruction necessity or instruction necessity level according to the order in which the visual targets were viewed.
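 A hedged illustration of such a lookup, with a hypothetical table keyed on subject, personality group, and permutation (the levels mirror the examples above; all names are illustrative):

```python
# Levels: 1 = lowest need for instruction, 5 = highest need.
REFERENCE_DATA = {
    ("english", "A", ("question_text", "problem_text", "underline", "choices")): 1,
    ("english", "B", ("question_text", "problem_text", "underline", "choices")): 3,
    ("english", "B", ("problem_text", "question_text", "underline", "choices")): 2,
    ("english", "A", ("underline", "choices", "problem_text", "question_text")): 5,
    # ... one entry per (subject, personality, permutation) combination
}

def judge_guidance_level(subject, personality, permutation, default=3):
    """Look up the instruction necessity level for an observed permutation;
    unlisted combinations fall back to a mid-level default here."""
    return REFERENCE_DATA.get((subject, personality, tuple(permutation)), default)
```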
 Next, the determination made by the instruction determination unit 3 according to a predetermined rule will be described. The predetermined rule is a deduction-based or addition-based rule: for example, -10 points when the underlined portion is viewed first, and -5 points when the problem sentence is viewed first. Under this rule it is also predetermined that, for example, a score of -10 or lower (a deduction of 10 points or more) corresponds to instruction necessity level 5, the highest need for instruction, while a score of 0 corresponds to level 1, the lowest need.
 Accordingly, for the learner identified by the learner information acquisition unit 5, the instruction determination unit 3 applies the predetermined rule obtained from the storage unit 6 to the permutation of visual targets identified by the visual target identification unit 2, scores it, and determines the instruction necessity level based on the scoring result. For example, since learner 0001 viewed the underlined portion first, the instruction determination unit 3 assigns a score of -10 and determines the instruction necessity level to be 5.
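 The deduction rule could be sketched as follows; the point values follow the examples in the text, while the mapping of intermediate scores is an assumption:

```python
# Penalty applied according to which visual target was viewed first.
FIRST_VIEW_PENALTY = {"underline": -10, "problem_text": -5}

def judge_by_rule(permutation):
    """Score a viewing permutation by deduction and map the score to an
    instruction necessity level (0 -> level 1, -10 or lower -> level 5)."""
    score = FIRST_VIEW_PENALTY.get(permutation[0], 0) if permutation else 0
    if score <= -10:
        return 5
    if score == 0:
        return 1
    return 3  # intermediate scores: assumed mid level
```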
 The instruction determination unit 3 may, for example, further take the following into account in determining the instruction necessity or instruction necessity level.
 If the learner answers in less time than a threshold regarded as the minimum needed to read through the problem sentence and all of the answer choices, the learner is likely to have answered without reading them at all, and the need for instruction can be said to be higher. In such a case, the instruction determination unit 3 may perform processing such as raising the instruction necessity level.
 (Storage unit)
 The storage unit 6 stores a program for operating the control unit 9, as well as the reference data referred to by the instruction determination unit 3 and various other data.
 (Learner information acquisition unit)
 The learner information acquisition unit 5 acquires, for example, the learner information of a specific learner registered in advance in a database or the like, and transmits the acquired learner information to the instruction determination unit 3. As one example, the learner information includes attribute information indicating the learner's attributes and learner identification information for distinguishing the target learner from other learners. The learner's attribute information is, for example, the learner's age, sex, and personality (methodical, easily distracted, and so on). The learner identification information is, for example, the learner's ID or e-mail address.
 Because the learner information includes learner identification information in this way, the target learner can be distinguished from other learners, and the determination result of the learner's instruction necessity or instruction necessity level can be linked with the learner information.
 (Display unit)
 The display unit 7 displays the visual targets the learner views. The display unit 7 may also display the determination result of the instruction necessity or instruction necessity level from the instruction determination unit 3. In the present embodiment, a configuration in which the information processing apparatus 100 includes the display unit 7 has been described as an example, but the display unit may be configured separately from the information processing apparatus 100.
 When the display unit 7 displays the determination result of the instruction necessity or instruction necessity level from the instruction determination unit 3, it may display the instruction necessity data output by the instruction determination unit 3 as-is, as shown in FIG. 4, or it may display display data containing objects that visualize the instruction necessity data. One example of such display data is a graph. The display data may indicate the instruction necessity level of a single learner, or may additionally show, for example, the average instruction necessity level of multiple learners. This allows the learner's instruction necessity level to be grasped at a glance and thus visualized.
 The display unit 7 may also display, together with the instruction necessity or instruction necessity level from the instruction determination unit 3, suggestion information based on it. Suggestion information is, for example, information prompting instruction on problems with a high instruction necessity level, or information proposing a teaching method. Concretely, the suggestion information may be a message such as "Instruction on relative pronouns in English grammar is needed." or "Please explain pages 8 to 10 of the textbook again."
 (Image acquisition unit)
 The information processing apparatus 100 may further include an image acquisition unit 8 that acquires, from an imaging device such as a camera, a captured image including the learner's field of view. The imaging device is not particularly limited as long as it can detect the learner's state; examples include a camera mounted on a glasses-type wearable device and a camera mounted on a head-mounted device. These devices may be of either the transmissive or the non-transmissive type. In the non-transmissive case, as one example, the camera (imaging device) and the display (display unit) the learner is viewing may be provided in the same device.
 A captured image including the learner's field of view is, for example, an image of the scene the learner is looking at, the display (display unit), and so on.
 The visual target identification unit 2 may identify the visual target the learner is viewing by referring to the captured image acquired by the image acquisition unit 8 and the detection result from the state detection unit 4. Specifically, the visual target identification unit 2 collates the coordinates of the captured image area with the learner's line-of-sight information detected by the state detection unit 4, and determines which coordinates in the captured image the learner's gaze position corresponds to. The visual target identification unit 2 may also be configured to recognize objects contained in the captured image acquired by the image acquisition unit 8 using, for example, machine learning.
 This makes it possible to suitably identify the visual target the learner is viewing.
 (Control unit)
 The control unit 9 centrally controls each unit of the information processing apparatus 100 and includes the visual target identification unit 2, the instruction determination unit 3, and the state detection unit 4. Each function of the control unit 9, and all functions included in the information processing apparatus 100, may be realized by a CPU executing a program stored in, for example, the storage unit 6.
 [Processing example of the information processing apparatus]
 Next, the processing of the information processing apparatus 100 will be described based on the flowchart of FIG. 5.
 First, use of the information processing apparatus 100 begins and processing starts (step S20). The process proceeds to step S21 (hereinafter, "step" is omitted).
 In S21, the learner information acquisition unit 5 acquires the learner information. The details of the processing are as described under (Learner information acquisition unit). The learner information acquisition unit 5 transmits the learner information to the instruction determination unit 3, and the process proceeds to S22.
 In S22, the face information acquisition unit 1 acquires the learner's face image from the imaging unit and acquires the learner's face information from the face image (face information acquisition step). The details are as described under (Face information acquisition unit). The face information acquisition unit 1 transmits the acquired face information to the state detection unit 4, and the process proceeds to S23.
 In S23, the state detection unit 4 detects the learner's line of sight, blinks, pupil state, and the state of each facial part based on the face information extracted by the face information acquisition unit 1. The details are as described under (State detection unit). The detection result is transmitted to the visual target identification unit 2, and the process proceeds to S24.
 In S24, the visual target identification unit 2 identifies the visual target the learner is viewing based on the detection result obtained from the state detection unit 4 (visual target identification step). The details are as described under (Visual target identification unit). The visual target identification unit 2 transmits information on the identified visual targets to the instruction determination unit 3, and the process proceeds to S25.
 In S25, the instruction determination unit 3 acquires the reference data from the storage unit 6, and the process proceeds to S26.
 In S26, the instruction determination unit 3 refers to the information on visual targets obtained from the visual target identification unit 2 and the reference data obtained from the storage unit 6, and determines the instruction necessity or instruction necessity level for the learner (instruction determination step). The details are as described under (Instruction determination unit). The control unit 9 may store the determined instruction necessity or level in the storage unit 6. The instruction determination unit 3 transmits the determined instruction necessity or level to the display unit 7, and the process proceeds to S27.
 In S27, the display unit 7 displays the instruction necessity or instruction necessity level obtained from the instruction determination unit 3; the process proceeds to S28, and the processing ends.
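 Tying the illustrative sketches above together, a hypothetical driver mirroring steps S24 to S27 might look like the following; face capture and gaze estimation (S22 to S23) are assumed to have already produced the (x, y) gaze samples, and viewing_order and judge_guidance_level are the sketches given earlier.

```python
def process_learner(learner, gaze_samples, subject="english"):
    """Hypothetical end-to-end flow for one problem (steps S24 to S27)."""
    permutation = viewing_order(gaze_samples)                 # S24
    level = judge_guidance_level(subject,                     # S25-S26
                                 learner["personality"], permutation)
    print(f"learner {learner['id']}: instruction necessity level {level}")  # S27
    return level

# Usage with the earlier region layout: a learner who looks at the underline
# first, then the choices, then the problem and question sentences.
samples = [(60, 150), (60, 400), (300, 100), (300, 300)]
process_learner({"id": "0001", "personality": "A"}, samples)  # prints level 5
```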
 [Specific configuration of the instruction determination unit 3]
 A specific configuration of the instruction determination unit 3 will be described with reference to FIGS. 6 and 7. FIG. 6 is a block diagram showing the main configuration of the instruction determination unit 3 of the information processing apparatus 100 according to an embodiment of the present invention, and FIG. 7 shows an example of the target information acquired by the instruction determination unit 3. The instruction determination unit 3 includes a feature amount conversion unit 31, an estimation result calculation unit 32, and a teaching method classification unit 33.
 (Feature amount conversion unit)
 The feature amount conversion unit 31 calculates feature amounts relating to the learner by referring to the order in which the visual targets were viewed and to the learner's learner information, and calculates the distribution of the learner's attention across a problem as a feature amount. The order in which the visual targets were viewed corresponds to the "permutation" shown in FIGS. 3 and 4, the learner's learner information includes the "personality" shown in FIGS. 3 and 4, and the feature amount calculated by the feature amount conversion unit 31 corresponds to the "level" shown in FIG. 3.
 The feature amount conversion unit 31 may calculate the feature amounts by referring to sensing data that includes the order in which the visual targets were viewed and the learner's learner information. The sensing data includes the learner's line of sight, personality, and educational history, the instructor's labeling (evaluation) of the learner, and so on, and is obtained from video images taken during practice tests, coordinate data of the learner's gaze position, and the like.
 The feature amounts calculated by the feature amount conversion unit 31 represent the learner's degree of understanding and motivation; by calculating them, points where the learner's understanding is low, for example, are extracted and recorded. The feature amount conversion unit 31 calculates, as a feature amount, the distribution of the learner's attention over the visual targets within a problem, based, for example, on the gaze dwell time on each visual target. For instance, if the gaze dwell time on the choices is longer than that on the problem sentence, the feature amount conversion unit 31 determines that the learner's understanding of the problem is low, and calculates a feature amount representing that determination. The feature amount may be, for example, a five-level rating in which level 1 corresponds to the highest degree of understanding and level 5 to the lowest.
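 A minimal sketch of such a dwell-time-based feature, with arbitrary cut-offs standing in for whatever criterion an actual implementation would use:

```python
def attention_distribution(dwell_times):
    """Normalize per-target gaze dwell times (seconds) into an attention
    distribution over the visual targets of one problem."""
    total = sum(dwell_times.values()) or 1.0
    return {target: t / total for target, t in dwell_times.items()}

def understanding_level(dwell_times):
    """Five-step feature following the text's example: dwelling longer on
    the choices than on the problem sentence suggests low understanding."""
    dist = attention_distribution(dwell_times)
    bias = dist.get("choices", 0.0) - dist.get("problem_text", 0.0)
    if bias > 0.2:
        return 5  # strongly choice-biased: lowest understanding
    if bias > 0.0:
        return 4
    return 1      # reading centred on the problem sentence: high understanding
```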
 The order in which the visual targets were viewed is identified by the visual target identification unit 2 and sent to the feature amount conversion unit 31, and the learner information is sent from the learner information acquisition unit 5 to the feature amount conversion unit 31. The feature amount conversion unit 31 may further acquire data stored in the storage unit 6 and use it in calculating the feature amounts, and may send the calculated feature amounts and the sensing data to the estimation result calculation unit 32 and also to the storage unit 6.
 (Estimation result calculation unit)
 The estimation result calculation unit 32 calculates an estimation result by referring to the feature amounts calculated by the feature amount conversion unit 31, and may refer to the sensing data together with the feature amounts. The estimation result calculated by the estimation result calculation unit 32 is a prediction of, for example, the learner's grades on the next test, the learner's motivation in future lessons, the probability of the learner leaving the cram school, or the number of students taking a test.
 For example, if the feature amount is level 1, the estimation result calculation unit 32 estimates that the learner's grade on the next test will be good, and if the feature amount is level 5, it estimates that the grade will be poor. The estimation result calculation unit 32 may send the estimation result to the teaching method classification unit 33 and also to the storage unit 6.
 (Teaching method classification unit)
 The teaching method classification unit 33 determines the instruction necessity or instruction necessity level for the learner by referring to the estimation result calculated by the estimation result calculation unit 32 and to target information. The teaching method classification unit 33 may refer to the sensing data together with the estimation result and the target information in making this determination, and may send the determination result to the display unit 7 and also to the storage unit 6. Since the transmission of information from the instruction determination unit 3 to the display unit 7 is illustrated in FIG. 1, the display unit 7 is not shown in FIG. 6.
 The instruction necessity or instruction necessity level determined by the teaching method classification unit 33 corresponds to the "instruction necessity level" shown in FIG. 4. The instruction necessity level represents determination results such as that stricter instruction is required, that the instructor should be changed, or that a specific subject should be taught intensively, and is changed according to the target information.
 The target information is contained in target information data stored in advance in the storage unit 6; an example of the target information data is shown in FIG. 7. The target information data contains the target grades for each subject for each goal set by the learner. For example, when goal 1 is "passing university XX", the target scores required to achieve it are 100 points in Japanese, 200 points in mathematics, and 150 points in English.
 The teaching method classification unit 33 may refer to the learner's goal contained in the sensing data and to the learner's estimation result, and make the determination so that the instruction necessity level is raised when the difference between the estimated score and the target score is large, and lowered when the difference is small.
 For example, if the learner's estimated scores are 50 points in Japanese, 80 points in mathematics, and 100 points in English, and the learner's goal is "goal 1", the gap from the target score in mathematics is particularly large. The teaching method classification unit 33 therefore decides on a teaching method that intensively raises the mathematics grade, or on selecting an instructor who excels at teaching mathematics. Alternatively, since the gap between the estimated and target English scores is small, it may judge that giving up on raising the Japanese and mathematics grades and improving the English grade instead is more efficient, and decide on a teaching method that intensively raises the English grade.
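 A hedged sketch of this target-gap comparison, reusing the scores from the example above (all names are illustrative):

```python
def classify_teaching(estimated, target):
    """Pick the subject with the largest gap between the estimated score
    and the target score as the focus of intensive instruction."""
    gaps = {subject: target[subject] - estimated.get(subject, 0)
            for subject in target}
    focus = max(gaps, key=gaps.get)
    return {"focus_subject": focus, "gap": gaps[focus], "all_gaps": gaps}

# Goal 1 targets: Japanese 100, mathematics 200, English 150;
# estimated scores: Japanese 50, mathematics 80, English 100.
result = classify_teaching(
    {"japanese": 50, "math": 80, "english": 100},
    {"japanese": 100, "math": 200, "english": 150})
# result["focus_subject"] == "math" (gap of 120 points)
```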
 [Implementation example by software]
 The control blocks of the information processing apparatus 100 (in particular, the state detection unit 4, the visual target identification unit 2, and the instruction determination unit 3) may be realized by logic circuits (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software.
 In the latter case, the information processing apparatus 100 includes a computer that executes the instructions of a program, which is software realizing each function. This computer includes, for example, one or more processors and a computer-readable recording medium storing the program. The object of the present invention is achieved when, in the computer, the processor reads the program from the recording medium and executes it. A CPU (Central Processing Unit), for example, can be used as the processor. As the recording medium, a "non-transitory tangible medium" can be used: for example, a ROM (Read Only Memory), as well as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit. A RAM (Random Access Memory) into which the program is loaded may also be provided. The program may be supplied to the computer via any transmission medium (a communication network, broadcast waves, or the like) capable of transmitting it. One aspect of the present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
 [Second Embodiment]
 Another embodiment of the present invention will be described below. For convenience of description, members having the same functions as those described in the above embodiment are given the same reference signs, and their description is not repeated.
 [Information processing apparatus (2)]
 FIG. 8 is a block diagram showing the components of the information processing apparatus according to the present embodiment. As shown in FIG. 8, the information processing apparatus 101 differs from the information processing apparatus 100 of the first embodiment in that its control unit 90 includes a visualization information generation unit 91.
 The information processing apparatus 101 includes a face information acquisition unit 1 that acquires face information including information on at least a part of the learner's face, a visual target identification unit 2 that refers to the face information and identifies a plurality of visual targets the learner is viewing, and a visualization information generation unit 91 that generates visualized information indicating the visual targets identified by the visual target identification unit. The visualization information generation unit 91 acquires, in time series, the results of the visual target identification unit 2 identifying visual targets, and sends the generated visualization information to the display unit 7 and to the storage unit 6.
 The visualized information generated by the visualization information generation unit 91 may include information indicating which visual target is being visually recognized at each time. The visualized information generated by the visualization information generation unit 91 may also include information on how long each visual target was visually recognized within a predetermined time.
 FIG. 9 is a diagram showing an example of the visualized information generated by the visualization information generation unit 91. The visualized information is, for example, as shown in (a) of FIG. 9, a graph of time-series information with time on the horizontal axis, showing which question in a test (questions Q1 to Q5) was being viewed at each time. FIG. 9(a) shows that question Q1 was viewed from time t1 to t2, question Q2 from time t3 to t4, question Q3 from time t5 to t6, question Q4 from time t7 to t8, and question Q5 from time t9 to t10.
 The visualized information may also be, for example, as shown in (b) of FIG. 9, information indicating how long each question in the test was viewed during the total answer time of the test. FIG. 9(b) shows, as a pie chart, the proportion of time spent viewing each question during the total answer time of the test.
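 For concreteness only, the per-question viewing-time shares behind a chart like FIG. 9(b) could be derived from the time-series identification results (e.g. the (time, target) pairs sketched above) as follows; the function name and data layout are assumptions made for illustration.

    from collections import defaultdict

    def viewing_time_shares(records: list[tuple[float, str]]) -> dict[str, float]:
        # `records` holds (timestamp, target) pairs in chronological order;
        # each interval between consecutive timestamps is attributed to the
        # target being viewed at the start of the interval.
        durations: dict[str, float] = defaultdict(float)
        for (t0, target), (t1, _) in zip(records, records[1:]):
            durations[target] += t1 - t0
        total = sum(durations.values())
        if total == 0:
            return {}
        return {target: d / total for target, d in durations.items()}

    # Example: Q1 viewed for 2 s, then Q2 for 6 s -> shares 0.25 and 0.75.
    shares = viewing_time_shares([(0.0, "Q1"), (2.0, "Q2"), (8.0, "Q2")])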
 The visualized information may also be time-series information indicating which visual target (the problem sentence, the question sentence, the underlined portion, or the answer choices) was being viewed at each time while a single question was being solved, or information indicating the proportion of time spent viewing each of these targets during the total answer time for a single question.
 [Summary]
 An information processing device according to one aspect of the present invention includes a face information acquisition unit that acquires face information including information on at least a part of a learner's face, a visual target identification unit that refers to the face information to identify a plurality of visual targets visually recognized by the learner, and a guidance determination unit that determines the necessity of guidance, or a guidance necessity level, for the learner according to the combination of visual targets identified by the visual target identification unit.
 According to the above aspect, the necessity of guidance, or the guidance necessity level, for the learner can be determined according to the combination of visual targets visually recognized by the learner.
 In the information processing device according to the above aspect, the guidance determination unit determines the necessity of guidance, or the guidance necessity level, for the learner according to the order in which the visual targets were visually recognized.
 According to the above aspect, the necessity of guidance, or the guidance necessity level, for the learner can be determined according to the order in which the learner visually recognized the visual targets.
 In the information processing device according to the above aspect, the visual targets include at least one of a question sentence and a problem sentence.
 According to the above aspect, the necessity of guidance, or the guidance necessity level, for the learner can be determined based on information indicating that the learner visually recognized the question sentence or the problem sentence.
 In the information processing device according to the above aspect, the visual targets include a specific part of at least one of the question sentence and the problem sentence.
 According to the above aspect, the necessity of guidance, or the guidance necessity level, for the learner can be determined based on information indicating that the learner visually recognized a specific part of the question sentence or the problem sentence.
 In the information processing device according to the above aspect, the guidance determination unit further refers to learner information on the learner to determine the necessity of guidance, or the guidance necessity level, for the learner.
 According to the above aspect, by referring to the learner information on the learner, the necessity of guidance, or the guidance necessity level, for the learner can be determined more accurately.
 In the information processing device according to the above aspect, the guidance determination unit includes a feature quantity conversion unit that calculates a feature quantity relating to the learner by referring to the order in which the visual targets were visually recognized and to the learner information of the learner, an estimation result calculation unit that calculates an estimation result by referring to the feature quantity, and a guidance method classification unit that determines the necessity of guidance, or the guidance necessity level, for the learner by referring to the estimation result and target information.
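 Purely as an illustrative sketch under assumed conventions (the feature encoding, the stand-in estimator, and the thresholds below are all hypothetical, since the disclosure does not fix a concrete algorithm), the three-stage determination just described might look as follows.

    def determine_guidance(view_order: list[str],
                           learner_info: dict[str, float],
                           target_info: dict[str, float]) -> str:
        # Feature quantity conversion unit 31: encode the viewing order and
        # the learner information into a feature quantity (here, a vector of
        # two numbers; this encoding is an assumption for illustration).
        expected = sorted(view_order)
        order_score = sum(a == b for a, b in zip(view_order, expected))
        features = [order_score / max(len(view_order), 1),
                    learner_info.get("past_accuracy", 0.0)]

        # Estimation result calculation unit 32: compute an estimation result
        # from the feature quantity (a trivial average stands in for a model).
        estimate = sum(features) / len(features)

        # Guidance method classification unit 33: compare the estimation
        # result with the target information to decide the guidance
        # necessity level.
        gap = target_info.get("target_score", 1.0) - estimate
        if gap <= 0:
            return "no guidance needed"
        return "high guidance level" if gap > 0.5 else "low guidance level"

    # Example: a learner who read Q2 before Q1 and has 60% past accuracy.
    level = determine_guidance(["Q2", "Q1"], {"past_accuracy": 0.6},
                               {"target_score": 0.9})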
 An information processing method according to one aspect of the present invention includes a face information acquisition step of acquiring face information including information on at least a part of a learner's face, a visual target identification step of referring to the face information to identify a plurality of visual targets visually recognized by the learner, and a guidance determination step of determining the necessity of guidance, or a guidance necessity level, for the learner according to the combination of visual targets identified in the visual target identification step.
 According to the above aspect, the necessity of guidance, or the guidance necessity level, for the learner can be determined according to the combination of visual targets visually recognized by the learner.
 An information processing program according to one aspect of the present invention is an information processing program for causing a computer to function as any of the above information processing devices, the program causing the computer to function as the face information acquisition unit, the visual target identification unit, and the guidance determination unit.
 According to the above aspect, the necessity of guidance, or the guidance necessity level, for the learner can be determined according to the combination of visual targets visually recognized by the learner.
 An information processing device according to one aspect of the present invention includes a face information acquisition unit that acquires face information including information on at least a part of a learner's face, a visual target identification unit that refers to the face information to identify a plurality of visual targets visually recognized by the learner, and a visualization information generation unit that generates visualized information indicating the visual targets identified by the visual target identification unit.
 In the information processing device according to the above aspect, the visualized information includes information indicating which visual target is being visually recognized at each time.
 In the information processing device according to the above aspect, the visualized information includes information on how long each visual target was visually recognized within a predetermined time.
 The present invention is not limited to the above-described embodiments; various modifications are possible within the scope of the claims, and embodiments obtained by appropriately combining the technical means disclosed in different embodiments are also included in the technical scope of the present invention.
DESCRIPTION OF SYMBOLS
 1 Face information acquisition unit
 2 Visual target identification unit
 3 Guidance determination unit
 31 Feature quantity conversion unit
 32 Estimation result calculation unit
 33 Guidance method classification unit
 91 Visualization information generation unit
 100, 101 Information processing device

Claims (11)

  1.  An information processing device comprising:
     a face information acquisition unit that acquires face information including information on at least a part of a learner's face;
     a visual target identification unit that refers to the face information to identify a plurality of visual targets visually recognized by the learner; and
     a guidance determination unit that determines the necessity of guidance, or a guidance necessity level, for the learner according to the combination of visual targets identified by the visual target identification unit.
  2.  The information processing device according to claim 1, wherein the guidance determination unit determines the necessity of guidance, or the guidance necessity level, for the learner according to the order in which the visual targets were visually recognized.
  3.  The information processing device according to claim 1 or 2, wherein the visual targets include at least one of a question sentence and a problem sentence.
  4.  The information processing device according to any one of claims 1 to 3, wherein the visual targets include a specific part of at least one of the question sentence and the problem sentence.
  5.  The information processing device according to any one of claims 1 to 4, wherein the guidance determination unit further refers to learner information on the learner to determine the necessity of guidance, or the guidance necessity level, for the learner.
  6.  The information processing device according to claim 1, wherein the guidance determination unit comprises:
     a feature quantity conversion unit that calculates a feature quantity relating to the learner by referring to the order in which the visual targets were visually recognized and to the learner information of the learner;
     an estimation result calculation unit that calculates an estimation result by referring to the feature quantity; and
     a guidance method classification unit that determines the necessity of guidance, or the guidance necessity level, for the learner by referring to the estimation result and target information.
  7.  An information processing method comprising:
     a face information acquisition step of acquiring face information including information on at least a part of a learner's face;
     a visual target identification step of referring to the face information to identify a plurality of visual targets visually recognized by the learner; and
     a guidance determination step of determining the necessity of guidance, or a guidance necessity level, for the learner according to the combination of visual targets identified in the visual target identification step.
  8.  An information processing program for causing a computer to function as the information processing device according to any one of claims 1 to 6, the program causing the computer to function as the face information acquisition unit, the visual target identification unit, and the guidance determination unit.
  9.  An information processing device comprising:
     a face information acquisition unit that acquires face information including information on at least a part of a learner's face;
     a visual target identification unit that refers to the face information to identify a plurality of visual targets visually recognized by the learner; and
     a visualization information generation unit that generates visualized information indicating the visual targets identified by the visual target identification unit.
  10.  The information processing device according to claim 9, wherein the visualized information includes information indicating which visual target is being visually recognized at each time.
  11.  The information processing device according to claim 9 or 10, wherein the visualized information includes information on how long each visual target was visually recognized within a predetermined time.
PCT/JP2020/003068 2019-02-05 2020-01-29 Information processing device and information processing method WO2020162272A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2019018947 2019-02-05
JP2019-018947 2019-02-05
JP2019046234A JP7099377B2 (en) 2019-02-05 2019-03-13 Information processing equipment and information processing method
JP2019-046234 2019-03-13

Publications (1)

Publication Number Publication Date
WO2020162272A1 (en)

Family

ID=71947647

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/003068 WO2020162272A1 (en) 2019-02-05 2020-01-29 Information processing device and information processing method

Country Status (1)

Country Link
WO (1) WO2020162272A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5818251B2 (en) * 1978-09-12 1983-04-12 川重車体工業株式会社 Rainwater infiltration prevention device for vehicle sliding windows
JP2008139553A (en) * 2006-12-01 2008-06-19 National Agency For Automotive Safety & Victim's Aid Driving aptitude diagnosing method, evaluation standard determining method for driving aptitude diagnosis, and driving aptitude diagnostic program

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Measurement and analysis of eye movement for reading documents and appreciating pictures", IEICE TECHNICAL REPORT, vol. 105, no. 336, 8 October 2005 (2005-10-08) *
HIROKI FUJIYOSHI: "A method of estimating English abilities using eye gaze information of answering English questions", IEICE TECHNICAL REPORT, vol. 115, no. 25, 7 May 2015 (2015-05-07), pages 50 - 51 *

Similar Documents

Publication Publication Date Title
WO2018150239A1 (en) Interactive and adaptive learning and neurocognitive disorder diagnosis systems using face tracking and emotion detection with associated methods
CN106599881A (en) Student state determination method, device and system
KR102262889B1 (en) Apparatus and method for diagnosis of reading ability based on machine learning using eye tracking
CA2790394C (en) Adaptive visual performance testing system
KR102383458B1 (en) Active artificial intelligence tutoring system that support management of learning outcome
JP7099377B2 (en) Information processing equipment and information processing method
Abdulkader et al. Optimizing student engagement in edge-based online learning with advanced analytics
Revadekar et al. Gauging attention of students in an e-learning environment
KR102262890B1 (en) Reading ability improvement training apparatus for providing training service to improve reading ability in connection with reading ability diagnosis apparatus based on eye tracking and apparatus for providing service comprising the same
Kaklauskas et al. Student progress assessment with the help of an intelligent pupil analysis system
CN110275987A (en) Intelligent tutoring consultant generation method, system, equipment and storage medium
CN115205764B (en) Online learning concentration monitoring method, system and medium based on machine vision
Khan et al. Human distraction detection from video stream using artificial emotional intelligence
JP2020173787A (en) Information processing apparatus, information processing system, information processing method, and information processing program
JP2005338173A (en) Foreign language reading comprehension learning support device
KR102245319B1 (en) System for analysis a concentration of learner
KR20220061384A (en) Apparatus and method for detecting learners' participation in an untact online class
WO2020162272A1 (en) Information processing device and information processing method
KR20200000680U (en) The Device for improving the study concentration
US20230360548A1 (en) Assist system, assist method, and assist program
Utami et al. A Brief Study of The Use of Pattern Recognition in Online Learning: Recommendation for Assessing Teaching Skills Automatically Online Based
JP7111042B2 (en) Information processing device, presentation system, and information processing program
Boels et al. Automated Gaze-Based Identification of Students’ Strategies in Histogram Tasks through an Interpretable Mathematical Model and a Machine Learning Algorithm
KR102383457B1 (en) Active artificial intelligence tutoring system that support teaching and learning and method for controlling the same
MežA et al. Towards automatic real-time estimation of observed learner’s attention using psychophysiological and affective signals: The touch-typing study case

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20752894

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20752894

Country of ref document: EP

Kind code of ref document: A1