WO2020179128A1 - Learning assist system, learning assist device, and program - Google Patents

Learning assist system, learning assist device, and program

Info

Publication number
WO2020179128A1
Authority
WO
WIPO (PCT)
Prior art keywords
learner
unit
difference
display
model
Prior art date
Application number
PCT/JP2019/042527
Other languages
French (fr)
Japanese (ja)
Inventor
力也 田尻
真理子 味八木
昌樹 高野
さゆり 橋爪
純一 桑田
Original Assignee
Hitachi, Ltd. (株式会社日立製作所)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi, Ltd. (株式会社日立製作所)
Publication of WO2020179128A1 publication Critical patent/WO2020179128A1/en

Classifications

    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment
    • G09B 19/24 - Teaching: use of tools
    • G09B 5/02 - Electrically-operated educational appliances with visual presentation of the material to be studied
    • G09B 9/00 - Simulators for teaching or training purposes
    • G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 - Visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/377 - Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
    • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to a learning support system, a learning support device, and a program used when learning a technique.
  • Patent Document 1 discloses a method of extracting and saving the instructor's movements, such as the position, movement amount, and movement speed of the instructor's hand, under various environments. In this method, the instructor performs the same work each time while the environment is changed, the instructor's movements are measured each time by motion capture or the like, and the changes in the environment and the movements are stored in association with each other.
  • In Patent Document 2, a technology is proposed in which the measured motion information of the instructor is converted into an image, and the image is superimposed on the actual work scene by augmented reality (AR) technology using a glasses-type wearable device (smart glasses) or the like.
  • In this technology, the information processing unit compares the surface shape of the work target object measured by the imaging unit with a surface shape of the object stored in advance, calculates the amount of thermal deformation required at each point of the work target object, then calculates the heating position and heating amount needed to obtain that deformation, and superimposes an image indicating the heating position on the image of the work target object on a head-mounted display.
  • A conventional learning support system that displays the digitized technique of the instructor to the learner, such as that of Patent Document 2, presents the technique in the same way both to learners having different proficiency levels and to learners having different characteristics such as the dominant hand. In such a system, learners with differing proficiency levels and characteristics may not be able to learn efficiently.
  • The present invention therefore aims to provide a learning support system that enables a learner to efficiently learn a technique.
  • To this end, the present invention provides a display unit worn by a learner; an imaging unit worn by the learner that captures the learner's visual field image; a storage unit that stores a model video, which is a moving image of the work operation of an instructor serving as a model for the learner's work operation; and a calculation unit.
  • The calculation unit superimposes the model video on the visual field image captured by the imaging unit, displays it on the display unit, and dynamically changes the display content of the model video according to the characteristics of the learner's work operation included in the visual field image.
  • Since the displayed model video dynamically changes according to the learner's work operation, each learner can train to acquire the technique at a pace suited to them. This allows the learner to acquire the technique efficiently.
  • FIG. 3 is a block diagram showing details of the configuration of the glasses-type device 201.
  • An example of a display change for an operation element whose difference has decreased.
  • A block diagram showing a configuration example of the control system 101 of the learning support system.
  • A flowchart showing the overall flow of the learning support system of the first embodiment.
  • A diagram showing the input/output sequence of the control system 101, the glasses-type device 201, and the sensing device 204.
  • (a), (b): Examples of the UI displayed when inputting basic information and the training type.
  • An explanatory diagram of difference determination, proficiency evaluation, and review.
  • An example of a log stored in the information storage unit 17.
  • A flowchart showing the flow in the level determination mode.
  • An explanatory diagram showing the configuration of the learning support system of the second embodiment.
  • A block diagram showing the configuration of the glasses-type device worn by the instructor.
  • A block diagram showing the configuration of the control system 101 of the learning support system of the second embodiment.
  • A flowchart showing the operation flow of the learning support system of the second embodiment.
  • A block diagram showing the configuration of the learning support system of the third embodiment.
  • A flowchart showing the operation flow of the learning support system of the third embodiment.
  • A flowchart showing the operation flow of a modification of the learning support system of the third embodiment.
  • An explanatory diagram showing the configuration of the learning support system of the fourth embodiment.
  • This learning support system includes a glasses-type device 201 worn by a learner 102, and a control system 101 that is connected to the glasses-type device 201 via a network 30 (as an example, a wireless LAN (Local Area Network)) and controls the functions of the glasses-type device 201.
  • The glasses-type device 201 includes a display unit 19 and an imaging unit (camera) 15 that captures the visual field image of the learner 102.
  • The control system 101 includes a calculation unit 16 that performs the various computations of this learning support system based on information received from the glasses-type device 201, and an information storage unit 17 that stores various information.
  • The calculation unit 16 superimposes a model video (see, for example, FIG. 4), which is a video of the work motion of the instructor serving as a model for the work motion of the learner 102, on the visual field image captured by the camera 15, and displays it on the display unit 19.
  • The display content of the model video is dynamically changed according to the characteristics of the work operation of the learner 102 included in the visual field image.
  • The information storage unit 17 stores at least model data including the model video and position information of the instructor's hand.
  • The calculation unit 16 performs its various calculations by executing a learning support program stored in advance in the information storage unit 17. Specifically, the calculation unit 16 includes a video capture unit 161 that captures the visual field image of the learner 102 from the camera 15, and a superimposition display unit 162 that superimposes the model video on the learner's visual field image and outputs the superimposed image to the display unit 19.
  • The training operation flow of FIG. 3 is controlled and executed by the control system 101.
  • First, the camera 15 captures the visual field image of the learner 102.
  • The video capture unit 161 captures, via the network 30, the learner's visual field image captured by the camera 15 (step S41).
  • Next, the superimposition display unit 162 reads out the model video stored in the information storage unit 17, adjusts its position so that predetermined reference points of the instructor's hand and probe in the model video overlap the learner's hand and probe 202 in the learner's visual field image, and displays the position-adjusted model video on the display unit 19 (step S42).
  • Next, the difference calculation unit 163 calculates the difference between the position of the probe 202 held by the learner and the position of the probe held by the instructor, as shown in FIG. Specifically, the difference calculation unit 163 first calculates, from the learner's visual field image, the position of the center 202b of the surface of the learner's probe 202 that contacts the surface of the inspection target. It then reads the position of the center 202c of the corresponding surface of the instructor's probe from the model data stored in the information storage unit 17, and calculates the difference between the position of the center 202c of the instructor's probe and the position of the center 202b of the learner's probe 202 (step S43).
  • The difference determination unit 16A determines whether the difference obtained by the difference calculation unit 163 in step S43 is equal to or greater than a predetermined threshold value; if it is, the result (difference determination result) is output to the display adjustment unit 16C (step S44).
  • The display adjustment unit 16C performs image processing to shift the center 202c of the instructor's probe in the model video to the position of the center 202b of the learner's probe 202, and outputs the processed model video to the glasses-type device 201 via the network 30.
  • The model video received by the glasses-type device 201 is displayed on the display unit 19 (step S45).
  • When the processed model video is displayed to the learner in step S45, the flow returns to step S41.
  • The learning support system repeats the above flow until the learner's training is completed. If the difference determination unit 16A finds in step S44 that the difference calculated by the difference calculation unit 163 is less than the threshold value, the flow returns directly to step S41.
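The loop of steps S41 to S45 described above can be sketched as follows. This is a minimal illustration, assuming 2-D pixel coordinates for the probe centers 202b and 202c; the pixel threshold and all function names are hypothetical and do not appear in the patent.

```python
import math

THRESHOLD_PX = 20.0  # assumed pixel threshold used in step S44


def probe_difference(learner_center, model_center):
    """Step S43: Euclidean distance between the learner's probe
    center (202b) and the instructor's probe center (202c)."""
    dx = learner_center[0] - model_center[0]
    dy = learner_center[1] - model_center[1]
    return math.hypot(dx, dy)


def adjust_model_position(model_center, learner_center):
    """Step S45: offset that shifts the model video so its probe
    center lands on the learner's probe center."""
    return (learner_center[0] - model_center[0],
            learner_center[1] - model_center[1])


def training_step(learner_center, model_center):
    """One pass of steps S43 to S45: returns the display offset to
    apply, or (0, 0) when the difference is under the threshold."""
    diff = probe_difference(learner_center, model_center)
    if diff >= THRESHOLD_PX:  # step S44
        return adjust_model_position(model_center, learner_center)
    return (0.0, 0.0)
```

In a real system this step would run once per captured frame, with the probe centers obtained from image processing of the visual field image and the model data.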
  • In the above example, the display adjustment unit 16C aligns the hand position of the model video with the hand position in the learner's visual field image, using the center 202b of the learner's probe 202 and the center 202c of the instructor's probe as reference points.
  • Alternatively, the display adjustment unit 16C may adjust the display position of the model video by detecting the position of a part of the learner's body, for example by aligning the fingertip of the learner 102 with the fingertip of the instructor in the model video.
  • The display adjustment unit 16C can also display the positional difference between the center 202b of the probe 202 held by the learner and the center 202c of the probe held by the instructor with an arrow 102d, as shown in FIG.
  • By looking at the length and inclination of the arrow 102d, the learner can recognize how far their own movement deviates from the model, and can easily bring the position of their probe 202 closer to the position of the instructor's probe while watching the arrow 102d.
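As a sketch, the length and inclination of the arrow 102d can be derived directly from the two probe centers; the coordinate convention and function name here are illustrative assumptions, not taken from the patent.

```python
import math


def deviation_arrow(learner_center, model_center):
    """Return (length, angle_deg) of an arrow drawn from the
    learner's probe center (202b) toward the instructor's probe
    center (202c), in image coordinates."""
    dx = model_center[0] - learner_center[0]
    dy = model_center[1] - learner_center[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))
```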
  • The position of the instructor's elbow may also be included in the model video, with the elbow position information stored in the information storage unit 17.
  • In that case, the difference calculation unit 163 can detect the position of the learner's elbow from the learner's visual field image and calculate the difference between the positions of the learner's and the instructor's elbows.
  • When the difference is large, the display adjustment unit 16C can display a concrete textual instruction telling the learner how to move their body, for example "Lift your elbow more!" as shown in FIG.
  • The display adjustment unit 16C may also display a hint by placing a sign such as an arrow 19b on the body part to be moved.
  • The information storage unit 17 may also store a skeleton diagram 19c showing the positions of the bones and joints of the instructor's hand 102b.
  • In that case, the display adjustment unit 16C can display the skeleton diagram 19c superimposed on the image of the instructor's hand 102b.
  • The learner can then easily correct the position of their own hand while observing the displayed joint positions of the instructor's hand 102b.
  • The information storage unit 17 may also store a color distribution map 19d that shows, in color, the shape of the hand or arm performing the work and the magnitude of the force to be applied to each part of the hand at each point in the work.
  • In that case, the difference calculation unit 163 detects the position and shape of the learner's hand and arm from the learner's visual field image, and the display adjustment unit 16C displays the color distribution map 19d superimposed on the learner's hand 102c.
  • The display adjustment unit 16C can also detect the region where the instructor's probe and the learner's probe 202 overlap, display the overlapping region in white, and display the misaligned regions of the two probes in different colors.
  • Because the misaligned regions are displayed in colors that differ between the learner and the instructor, the difference between their positions is emphasized, and the learner can easily recognize the direction in which to move their hand.
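The overlap-based coloring described above can be sketched as set operations on pixel coordinates; treating each probe region as a set of pixels is an implementation assumption, and the specific colors for the misaligned regions are not specified in the patent.

```python
def color_probe_regions(instructor_px, learner_px):
    """Map each pixel of the two probe regions to a display color:
    overlapping pixels are white; misaligned pixels get a color
    that differs between instructor and learner (colors assumed)."""
    instructor_px, learner_px = set(instructor_px), set(learner_px)
    colors = {}
    for p in instructor_px & learner_px:
        colors[p] = "white"            # aligned region
    for p in instructor_px - learner_px:
        colors[p] = "blue"             # instructor-only region
    for p in learner_px - instructor_px:
        colors[p] = "red"              # learner-only region
    return colors
```

When the learner moves the probe so that the red and blue regions shrink and the white region grows, the positions are converging.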
  • When the positions deviate, the display adjustment unit 16C may rewind the model video and replay it from the timing at which the position shift started.
  • The replayed model video may be divided into short time segments, with each segment reproduced repeatedly.
  • The glasses-type device 201 may be equipped with a vibration device. In that case, when the position of the probe 202 held by the learner deviates from the position of the probe held by the instructor, the vibration device mounted on the glasses-type device 201 may give the learner force feedback such as vibration or impact.
  • The difference calculation unit 163 calculates the speed at which the learner moves the hand from the images captured by the camera 15, and obtains the difference between the calculated speed of the learner's hand and the moving speed of the instructor's hand stored in the information storage unit 17, based on the difference in the hand positions of the learner and the instructor.
  • The display adjustment unit 16C then adjusts the reproduction speed of the model video to match the learner's speed.
  • For example, when the learner's work is slower, the display adjustment unit 16C slows down the reproduction speed of the model video to match the speed of the learner 102.
  • Alternatively, the learner's speed and the instructor's speed may be indicated by a difference in the color tone of the displayed image, or by a bar meter or numerical value, so that the learner can recognize the difference between their own speed and the instructor's speed.
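A minimal sketch of the speed comparison and playback-speed adjustment, assuming hand positions sampled at a fixed frame interval; the minimum playback rate and the rule of never speeding up past normal are illustrative assumptions.

```python
def hand_speed(positions, dt):
    """Average hand speed (px/s) from successive frame positions
    sampled every dt seconds."""
    if len(positions) < 2:
        return 0.0
    total = 0.0
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return total / ((len(positions) - 1) * dt)


def playback_rate(learner_speed, instructor_speed, min_rate=0.25):
    """Slow the model video down to the learner's pace; never
    speed it up past normal (rate 1.0)."""
    if instructor_speed <= 0:
        return 1.0
    return max(min_rate, min(1.0, learner_speed / instructor_speed))
```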
  • Information about the pressure with which the instructor presses the probe against the inspection target (hereinafter, the instructor's pressure) is stored in the information storage unit 17 in advance.
  • The superimposition display unit 162 reads out the instructor's pressure information from the information storage unit 17 and superimposes a circle 103b2, whose diameter indicates the magnitude of the pressure, on the learner's visual field image as shown in FIG.
  • The difference calculation unit 163 calculates the pressure with which the learner presses the probe 202 against the inspection target from the change in the color of the learner's hand, fingertip, or nail when pressing the probe 202 against the inspection target.
  • When the instructor's pressure and the learner's pressure differ, the display adjustment unit 16C displays the difference by, for example, the sizes of the circles and a difference in their colors.
  • The difference between the instructor's pressure and the learner's pressure can also be emphasized by filling the area between the circle 103b1 indicating the pressure of the learner 102 and the circle 103b2 indicating the instructor's pressure, as shown in FIG. By moving the probe so as to reduce the filled area, the learner can train to press the test object with a pressure close to the instructor's.
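Assuming the circles 103b1 and 103b2 are drawn concentrically with radii proportional to pressure (the scale factor and concentric layout are assumptions, not stated in the patent), the filled area between them can be computed as follows.

```python
import math


def pressure_radius(pressure, scale=10.0):
    """Map a pressure value to a display radius; scale is an
    assumed display constant."""
    return pressure * scale


def filled_area(learner_pressure, instructor_pressure, scale=10.0):
    """Area filled between the concentric circles 103b1 and 103b2.
    A shrinking area means the learner's pressure is approaching
    the instructor's."""
    r1 = pressure_radius(learner_pressure, scale)
    r2 = pressure_radius(instructor_pressure, scale)
    return math.pi * abs(r1 * r1 - r2 * r2)
```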
  • The superimposition display unit 162 reads out the motion locus information from the information storage unit 17 and superimposes a flow line indicating the motion locus on the learner's visual field image.
  • The motion locus of the instructor may be displayed in different colors depending on whether it is a past locus (past motion locus) 103c or a future locus (future motion locus) 103d relative to the current position of the learner's probe 202.
  • The difference calculation unit 163 calculates the motion locus of the learner's probe 202 from the learner's visual field image captured by the camera 15, and obtains the difference between the motion locus of the learner's probe 202 and the motion locus of the instructor's probe.
  • When the difference determination unit 16A determines that the difference is equal to or larger than a predetermined value, the display adjustment unit 16C may superimpose a flow line showing the motion locus of the learner's probe 202 on the already-displayed motion loci 103c and 103d of the instructor's probe, so that the learner can recognize the difference between the instructor's motion trajectory and their own.
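One simple way to quantify the trajectory difference is the mean pointwise distance between the two loci; the patent does not specify the comparison method, so sampling both loci at the same timestamps is an assumption of this sketch.

```python
def trajectory_difference(learner_path, instructor_path):
    """Mean pointwise distance between two motion loci sampled at
    the same timestamps (a simple stand-in for the patent's
    unspecified trajectory comparison)."""
    n = min(len(learner_path), len(instructor_path))
    if n == 0:
        return 0.0
    total = 0.0
    for (x0, y0), (x1, y1) in zip(learner_path[:n], instructor_path[:n]):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return total / n
```

A production system might instead use a time-warping measure (e.g. dynamic time warping) so that pace differences do not inflate the positional difference.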
  • The superimposition display unit 162 reads the gazing point information from the information storage unit 17 and superimposes a point 103a indicating the gazing point on the learner's visual field image.
  • The glasses-type device 201 includes a line-of-sight sensor 20 (see FIG. 11, described later), which detects the learner's gazing point.
  • The difference calculation unit 163 calculates the difference between the learner's gazing point and the instructor's gazing point 103a stored in the information storage unit 17.
  • When the difference determination unit 16A determines that the difference between the learner's gazing point and the instructor's gazing point 103a is equal to or greater than the threshold value, the display adjustment unit 16C highlights the instructor's gazing point 103a by blinking or the like. In addition to the instructor's gazing point 103a, the display adjustment unit 16C may display the learner's gazing point detected by the line-of-sight sensor 20 on the display unit 19, using a cross symbol or the like.
  • The glasses-type device 201 may also be provided with a recording device that records work sounds.
  • The recording device acquires the learner's work sound, and the difference calculation unit 163 calculates the difference between the recorded work sound of the learner and the work sound of the model video stored in the information storage unit 17.
  • When the sounds deviate, the display adjustment unit 16C indicates the deviation using waveforms, colors, text, or the like.
  • The difference calculation unit 163 may also be configured to determine, for an operation element in which the learner's work operation and the instructor's work operation are misaligned, the cause of the difference between them.
  • The display adjustment unit 16C may then cause the display unit 19 to display which misalignment of the operation elements statistically tends to occur, together with its cause.
  • The glasses-type device 201 includes glasses, and a camera 15, a display unit 19, a wireless communication device 13, a CPU 12, a memory 18, and a line-of-sight sensor 20 mounted on part of the glasses.
  • The camera 15 captures an image in a predetermined direction, such as the direction of the user's line of sight, and measures the user's work motion.
  • The display unit 19 projects the model video adjusted by the display adjustment unit 16C, under the control of the calculation unit 16 and the CPU 12, into the user's visual field.
  • The display unit 19 may also have a structure that projects the image onto the user's retina.
  • A wearable display may be used instead of the glasses. In this case, the user's visual field image captured by the camera 15 and the model video adjusted by the display adjustment unit 16C are superimposed and displayed on the wearable display.
  • The wireless communication device 13 handles communication between the CPU 12 and the control system 101.
  • The CPU 12 controls the operation of each part of the glasses-type device 201 based on the information received from the calculation unit 16 of the control system 101. Specifically, the CPU 12 instructs the wireless communication device 13 to transmit the video data captured by the camera 15 to the video capture unit 161, and sends the information received from the superimposition display unit 162 and the display adjustment unit 16C to the display unit 19 to be displayed.
  • The image captured by the camera 15 and the calculation results of the CPU 12 are stored in the memory 18 as needed. The memory 18 can also store the image to be displayed on the display unit 19.
  • The line-of-sight sensor 20 is a sensor that detects where the user's gazing point is, and transmits the detected gazing point data to the CPU 12.
  • The display adjustment unit 16C has been described above as adjusting the position of the instructor's probe to the position of the learner's probe 202 at all times. However, the information storage unit 17 may store a plurality of training modes (Table 1), such as a beginner mode for basic training in which the instructor's hand is aligned with the learner's pace at all times, and a normal mode in which the position of the instructor's hand is corrected only when the difference exceeds a predetermined value.
  • The training modes stored in the information storage unit 17 will now be described.
  • In the beginner mode, the model video follows the work operation of the learner 102 during playback, and the superimposed position and operation speed of the model video are dynamically changed.
  • Since the model video is adjusted to the learner's operation pace, the learner can closely observe the instructor's work and learn the correct work pattern.
  • The normal mode is a training mode that emphasizes the instructor's work speed, playing the model video so that the learner can improve the quality of the work while getting used to that speed.
  • In the normal mode, the model video is basically played at the instructor's work speed, and the superimposed position and playback speed of the model video are matched to the learner 102 only when the work operation of the learner 102 falls behind by a certain amount or more.
  • In each training mode, the display adjustment unit 16C dynamically changes the display content of the model video according to the work operation of the learner 102, based on the difference determination result from the difference determination unit 16A. Specifically, the display adjustment unit 16C dynamically changes the transmittance of each operation element of the model video and switches whether each element is displayed, so that an operation element whose difference is equal to or greater than a predetermined value is displayed in a manner easier for the learner 102 to recognize than an operation element whose difference is less than the predetermined value.
  • For example, a circle 103b2 indicating the instructor's pressure, the instructor's gazing point 103a, and the movement loci 103c and 103d are displayed, and the learner's pressure and gazing point are measured against them.
  • When the differences from the instructor's pressure and gazing point are decreasing, the learner is working with an appropriate pressure and gazing point.
  • In that case, transparency control is performed to increase the transparency of the displayed pressure circle 103b2 and the instructor's gazing point 103a.
  • Conversely, for an operation element whose difference from the instructor's work operation is increasing, the display adjustment unit 16C displays that element with low transparency so that it is emphasized.
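The transparency control can be sketched as a mapping from each operation element's current difference and its trend to an opacity value; the threshold, minimum opacity, and linear fade below the threshold are illustrative assumptions.

```python
def element_alpha(diff, threshold, trend):
    """Opacity (0.0 transparent .. 1.0 opaque) for one operation
    element (e.g. pressure circle, gazing point, motion locus).
    Elements at/above the threshold, or whose difference is
    growing, stay fully emphasized; elements whose difference is
    shrinking fade out. 'trend' is the change in the difference
    since the previous frame."""
    if diff >= threshold or trend > 0:
        return 1.0  # emphasize: low transparency
    # fade proportionally as the difference shrinks below threshold
    return max(0.2, diff / threshold)
```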
  • The information storage unit 17 stores a plurality of modes as the learning support program, as shown for example in Table 1. Specifically, in addition to the beginner mode and normal mode described above, a level determination mode, a guidance mode, and a look-back mode are stored in the information storage unit 17.
  • When executing these modes, the calculation unit 16 of the control system 101 includes, as illustrated in FIG., in addition to the video capture unit 161, the superimposition display unit 162, the difference calculation unit 163, the difference determination unit 16A, and the display adjustment unit 16C, a level evaluation unit 16B that evaluates the skill level of the learner 102 based on the difference determination result from the difference determination unit 16A, and a statistical calculation unit 16D that statistically processes the skill data of the learner 102.
  • A known method can be used for the superimposition display unit 162 to align and superimpose the learner's visual field image and the model video.
  • For example, a mark is attached to or around an object that is in the line of sight of the learner 102 during skill acquisition, such as the learner's body (hand, etc.), the object to be ultrasonically examined, or the probe held by the learner, and the superimposition display unit 162 uses the mark as a reference marker to determine the display position of the model video.
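A minimal sketch of marker-based alignment, assuming a single reference marker and a pure translation; a real implementation would typically detect several markers and also estimate rotation and scale (e.g. with a similarity or affine transform).

```python
def alignment_offset(marker_in_view, marker_in_model):
    """Translation that maps the model video's reference marker
    onto the marker detected in the learner's visual field image."""
    return (marker_in_view[0] - marker_in_model[0],
            marker_in_view[1] - marker_in_model[1])


def place_model_points(points, offset):
    """Apply the offset to each model-video point before overlay."""
    dx, dy = offset
    return [(x + dx, y + dy) for (x, y) in points]
```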
  • The functions of the calculation unit 16 are realized as software running on a CPU or GPU. Some or all of the functions of the calculation unit 16 can also be realized by hardware such as an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array).
  • In addition to the learning support program including the training modes for skill acquisition, the information storage unit 17 stores a data table for the learner 102 that includes the basic information of the learner 102 (for example, individual physical differences such as age, dominant hand, habits, visual acuity, and grip strength), model data on the work operation performed by the instructor for each training type, a log of the skill data of the learner 102, a weakness list of the learner 102, and the like.
  • The model data stored in the information storage unit 17 includes the model video, a plurality of motion elements of the instructor's work motion (position, speed, pressure, motion locus, gazing point, and sound), basic information about the model video (for example, the dominant hand, the examination site of the ultrasonic examination shown in the model video, the examination content, and the patient's body type), and the display method to use for each motion element when the instructor's and learner's motions differ. The information storage unit 17 also stores the images captured by the camera 15 and the calculation results of the calculation unit 16 as necessary, and can further store the video to be displayed on the display unit 19.
  • the sensing device 204 is connected to the control system 101 via the network 30.
  • the sensing device 204 can include a pressure sensor, a tilt sensor, a temperature sensor, a humidity sensor, an optical sensor, a haptic device (e.g., a vibrator) that gives haptic feedback to the learner 102, and the like.
  • When the sensing device 204 is connected to the control system 101, the information acquired by the sensing device 204 can be used in the calculations of the calculation unit 16.
  • The superimposition display unit 162 can convert information such as pressure and inclination detected by the sensing device 204 into an image that can be shown on the display unit 19 and superimpose it on the visual field image of the learner 102.
  • When the sensing device 204 is equipped with sensors that acquire information that cannot be obtained from the camera 15, such as temperature, humidity, and light intensity, that information can be used by the display adjustment unit 16C for display control of the model video. Even for information that can be obtained from the camera 15, the sensing device 204 may acquire it with higher accuracy than the camera 15. It is preferable to use, as the sensing device 204, a tool having the same shape as a tool used in training (for example, the probe of an ultrasonic imaging apparatus).
  • Step S1: The control system 101 receives input of the basic conditions required for training from the learner. Specifically, the control system 101 causes the display unit 19 to display a UI screen such as that shown in FIG. The learner inputs the learner's basic information and the type of training on the displayed UI screen. The input information is sent to the control system 101 via the network 30 and stored in the information storage unit 17. The content for executing the training program is then activated and delivered to the glasses-type device 201.
  • Step S2: The control system 101 distributes the guidance-mode content to the glasses-type device 201, and the guidance mode is carried out.
  • By performing work operations according to the guide displayed on the display unit 19, the learner 102 can learn how to use the content: the work operations to perform in the training mode executed later, how to check the displayed information, and so on.
  • In this mode, the learner 102 is shown how information such as the learner 102's skill level, the difference between the learner's and instructor's work actions, and the proficiency determination will be displayed, as well as the differences between the training modes. The learner 102 can thus experience and learn in advance how this information is presented.
  • the guidance mode may be performed after the level determination mode, or may be repeatedly performed until the learner learns how to use the content.
  • Step S3: The control system 101 distributes the level-determination-mode content to the glasses-type device 201, and the level determination mode is executed. A specific flow of this mode will be described later; in this mode, the learner works while the model video is not reproduced.
  • The difference determination unit 16A and the level evaluation unit 16B evaluate the current proficiency of the learner 102 and set the difficulty of the subsequent training mode (beginner mode, normal mode, etc.) according to the proficiency evaluation.
  • the proficiency evaluation method is the same as the level evaluation method in each training mode.
  • Step S4 Next, the training is performed in the training mode set in the level determination mode.
  • The specific flow in the training mode is as described for FIG. 3: the difference calculation unit 163 calculates, based on the visual field image of the learner 102 obtained from the camera 15 and the sensing device 202, the difference between the motion of the learner 102 and the motion of the instructor in the model video for each motion element (position, speed, pressure, motion locus, gazing point, and sound).
  • The difference calculation unit 163 divides the six motion elements by unit time and obtains the difference for each motion element in each unit time, as shown in the spider chart of FIG.
  • The difference determination unit 16A determines whether the difference calculated by the difference calculation unit 163 for each motion element is greater than or equal to the allowable difference magnitude (threshold value 116) predetermined for that motion element.
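The per-unit-time difference calculation and threshold determination can be sketched as below. This is a simplified illustration under assumptions not stated in the source: each motion element is reduced to a single scalar per unit time, and "116" in the text is a drawing reference numeral rather than a numeric value, so the thresholds here are arbitrary placeholders.

```python
# Shorthand names for the six motion elements named in the text.
MOTION_ELEMENTS = ("position", "speed", "pressure", "locus", "gaze", "sound")

def judge_differences(learner, model, thresholds):
    """For one unit time, compute |learner - model| for each motion element
    (here reduced to one scalar per element) and flag elements whose
    difference reaches the allowable magnitude for that element."""
    result = {}
    for e in MOTION_ELEMENTS:
        diff = abs(learner[e] - model[e])
        result[e] = {"diff": diff, "exceeds": diff >= thresholds[e]}
    return result
```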
  • Based on this determination result, the display adjustment unit 16C adjusts the display content of the model video.
  • The motion elements whose difference is at or below the threshold value 116 are those the learner was able to perform well during the training, and the motion elements whose difference exceeds the threshold value 116 ("sound" in this figure) are those the learner could not perform well.
  • Step S5: When the training mode ends, the control system 101 activates the skill-level determination content, and the level evaluation unit 16B determines the skill level of the learner under training. Specifically, as shown in the spider chart of FIG. 17, the level evaluation unit 16B uses the difference determination result from the difference determination unit 16A to evaluate the learner's proficiency according to how many of the six motion elements have a difference at or below the threshold value 116.
  • The relationship between the number of motion elements at or below the threshold value 116 and the proficiency level can be set arbitrarily. For example, the number of levels may equal the number of motion elements at or below the threshold; when five of the six motion elements are at or below the threshold value 116, as shown in FIG., the level evaluation unit 16B may determine the learner's proficiency as "level 5".
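The level evaluation described here, where the level equals the number of motion elements within the threshold, can be sketched as follows. It assumes the per-element judgement structure produced by the difference determination, and the six-element/six-level mapping is just one possible setting, as the text notes.

```python
def proficiency_level(judgement):
    """Count the motion elements whose difference stayed within the
    threshold; with the six-element/six-level setting from the text,
    five passing elements give 'level 5'."""
    return sum(1 for r in judgement.values() if not r["exceeds"])
```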
  • Based on this evaluation, the learning mode and the content to focus on in the next session by the same learner 102 can be selected. Further, for example, when the learner's level exceeds a certain level, such as when the differences of all motion elements fall to or below the threshold value 116 in a beginner-mode training result, the next training session can be advanced to the normal mode.
  • Step S6: Next, the proficiency level determined in step S5 is displayed to the learner on the glasses-type device 201, and the look-back mode is executed according to the proficiency level.
  • In the look-back mode, the learner's work motion from the training mode of step S4 is reproduced for the learner 102, so that the learner can objectively grasp his or her current skill level.
  • In this mode, the superimposition display unit 162 reads the stored visual field image of the learner and displays a retrospective video in which the model video is superimposed on the visual field image.
  • The learner's work motion containing a motion element whose difference reached or exceeded the threshold value 116 (i.e., what the learner could not do well) may be called up and reproduced. This allows the learner to objectively grasp how his or her current technique differs from the model technique and where the weaknesses lie.
  • The statistical calculation unit 16D may generate a statistical graph (such as a histogram or the radar chart shown in FIG. 17) based on the difference determination result of each motion element determined by the difference determination unit 16A, and the learner's skill level may be displayed together with the statistical graph.
  • Skill data and a weakness list such as those shown in FIG. 18 may be shown to the learner on the display unit 19. The list and the statistical graph are stored in the information storage unit 17.
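The statistics generated by the statistical calculation unit 16D can be illustrated with a simple aggregation: radar-chart values as the mean difference per motion element over all unit times, and a weakness list of elements whose mean is at or above the threshold. The aggregation rule (mean, threshold-on-mean) is an assumption for illustration, not stated in the source.

```python
def summarize_training(per_unit_results, thresholds):
    """Aggregate per-unit-time differences into radar-chart values (the
    mean difference per motion element) and a weakness list (elements
    whose mean difference is at or above the threshold)."""
    radar, weaknesses = {}, []
    for e in per_unit_results[0]:
        mean = sum(r[e] for r in per_unit_results) / len(per_unit_results)
        radar[e] = mean
        if mean >= thresholds[e]:
            weaknesses.append(e)
    return radar, weaknesses
```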
  • The look-back mode may also be executed after the level determination mode, and is preferably executed after each training mode.
  • In this way, the learner 102 can undertake the next training session while aware of his or her current skill level. Through the above steps, the training for the learner 102 to acquire the technique is carried out.
  • Next, a specific flow of the level determination mode of step S3 will be described with reference to FIGS. 15 and 19.
  • Step S31: When the level determination mode starts, the content of the level determination mode is displayed on the display unit 19 of the glasses-type device 201, and the camera 15 captures the visual field image of the learner 102. The visual field image obtained from the camera 15 is transmitted to the control system 101. The sensing device 202 also measures the work operation of the learner 102 with each of its sensors and transmits the measured data to the control system 101.
  • The level determination may be performed using the same task as the training mode performed later by the learner, or using a basic task simpler than the training mode.
  • When the level is determined using the same task as the training mode, the number of modes stored in the information storage unit 17 can be reduced.
  • When the level determination is performed using a basic task simpler than the training-mode task, the number of modes stored in the information storage unit 17 increases, but it can be determined whether the learner 102 has reached the basic level.
  • Step S32: Based on the information obtained from the camera 15, the difference calculation unit 163 calculates the difference between the work motion of the learner 102 and the work motion of the model (not displayed in this mode) for the six motion elements.
  • The difference determination unit 16A determines, for each motion element, whether or not the difference in the hand motion of the learner 102 is greater than or equal to the threshold, and transmits the difference determination result to the level evaluation unit 16B.
  • the level evaluation unit 16B determines the proficiency level of the learner according to the difference determination result, and determines whether to perform the training in the normal mode or the beginner mode.
  • For example, the level evaluation unit 16B decides on normal-mode training when the learner's level is at or above a predetermined level, and on beginner-mode training when it is below that level.
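The decision between normal and beginner mode can be sketched by combining the level count with a cut-off. The cut-off value of 4 is a hypothetical setting, since the text only says "a predetermined level", and the judgement structure mirrors the per-element threshold flags described above.

```python
def determine_mode(judgement, normal_cutoff=4):
    """Level-determination sketch: the level is the number of motion
    elements whose difference stayed within the threshold; at or above
    the (assumed) cut-off the training proceeds in normal mode."""
    level = sum(1 for r in judgement.values() if not r["exceeds"])
    return level, ("normal" if level >= normal_cutoff else "beginner")
```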
  • Step S35: The level evaluation unit 16B transmits the level determination result of step S34 to the glasses-type device 201, and the learner's level is displayed. As in the look-back mode performed after the training mode, it is preferable to execute content that reviews the learner's work operation in the level determination mode.
  • Step S36 The display adjustment unit 16C constructs the training content according to the mode determined in step S34, and distributes the training content to the glasses-type device 201. After delivering the training content, the training flow by the learning support system returns to step S4 of FIG. 14, and the training mode is executed.
  • Depending on the training mode to be constructed, the display adjustment unit 16C may change the reproduction speed of the model video or adjust the display by other methods.
  • Methods other than lowering the playback speed are also available. For example, when the learner's motion falls behind by a certain amount or more, the display adjustment unit 16C may pause the model video, or may construct content that rewinds and replays it so that the learner can start over from the previous step.
  • The normal mode is a mode in which the learner trains at the instructor's movement speed, but even in the normal mode, if the learner's work motion falls behind by a certain amount or more, the display adjustment unit 16C may pause the playback of the model video so that the learner's work can catch up with it.
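The playback adjustments just described (slowing, pausing, rewinding) can be sketched as a small policy function. The lag thresholds and the restriction of rewinding to beginner mode are assumptions for illustration only; the patent leaves these choices open.

```python
def adjust_playback(lag_seconds, mode, pause_lag=2.0, rewind_lag=5.0):
    """Decide how to adjust model-video playback from the learner's lag.
    Policy (assumed): a large lag in beginner mode rewinds to the previous
    step; a moderate lag in any mode pauses; otherwise keep playing."""
    if mode == "beginner" and lag_seconds >= rewind_lag:
        return "rewind_to_previous_step"
    if lag_seconds >= pause_lag:
        return "pause"
    return "play"
```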
  • The display adjustment unit 16C may also construct the content so that, when a motion element of the learner's work motion matches the corresponding motion element of the instructor, the match is shown to the learner.
  • As the presentation method, feedback by display, sound, or vibration can be used. If the way a match is indicated is common to all motion elements, the learner can easily recognize that the matched motion element is being performed as in the model.
  • When the deviation between the learner's and instructor's motion elements observed over a certain period improves by a predetermined value or more (i.e., when the technique of the learner 102 has improved by a certain degree), the display adjustment unit 16C may construct content that gives the learner 102 feedback conveying the improvement.
  • For example, the display adjustment unit 16C can indicate that the level has risen through feedback such as display, sound, or vibration, or by highlighting the improved motion element.
  • In addition to visual and auditory feedback by display and sound, the display adjustment unit 16C may construct content that gives the learner tactile feedback such as vibration or impact through the sensing device 202, so that the learner feels as if actually performing the work.
  • In this way, the display of the model video can be dynamically changed according to the learner's work motion. Learners of different levels or with different characteristics can therefore each train at a learning pace that suits them, making the skill easier to acquire.
  • In the learning support system of the first embodiment, subtle knacks that are necessary for learning the technique but difficult to express in words are extracted as motion elements, and the difference between the instructor's and learner's techniques is determined and displayed for each motion element. The learner can therefore grasp, for each motion element, the knack of the technique, its fine details, and how much skill he or she currently has. Since the difference determination result is displayed in a form the learner can understand intuitively, the learner can acquire the technique through intuitive operation.
  • In the learning support system of the first embodiment, information such as pressure and line of sight, which is inherently difficult to understand from appearance alone, is superimposed on the learner's visual field image, so the learner can learn while checking tricks that are hard to convey even when an instructor teaches the technique directly.
  • In the first embodiment, an example was shown in which one learner trains using the learning support program stored in the control system 101, but a plurality of learners can train, at the same time or at different timings, by each using a glasses-type device connected via the network. The learning support system of the first embodiment can therefore efficiently transmit the advanced technique acquired by the instructor to many learners.
  • In the first embodiment, the skill is mastered through the two training modes of beginner mode and normal mode, but the training may use one mode, or three or more modes. The information storage unit 17 may also store other modes effective for the learner's skill acquisition in addition to the above modes.
  • In the first embodiment, the control system 101 includes the calculation unit 16 and the information storage unit 17, but the internal configuration of the control system 101 may instead be held in a cloud connected to the glasses-type device 201 via the network 30.
  • the learning support system of the second embodiment includes a glasses-type device 201 and a control system 101 connected to the glasses-type device 201 via a network, like the learning support system of the first embodiment.
  • Whereas the first embodiment adjusts the display of a model video based on the instructor's model data already stored in the information storage unit 17 of the control system 101, the second embodiment generates new model data from an instructor's model video not yet stored in the information storage unit 17 and uses the newly generated model data for display adjustment. In this respect the learning support system of the second embodiment differs from that of the first embodiment.
  • The learning support system of the second embodiment includes a glasses-type device 201 wearable by the learner 102, a device (for example, a glasses-type device) 203 wearable by the instructor 103 who is away from the learner, and a control system 101 connected to the glasses-type device 201 and the glasses-type device 203 via the network 30.
  • The glasses-type device 203 has the same configuration as the glasses-type device 201 and, as shown in FIG. 21, includes a camera 15B for capturing the visual field image of the instructor 103, the wireless communication device 13, the CPU 12, the memory 18, and a display unit 19B for displaying an image of the learner's operation.
  • the glasses-type device 203 may be provided with a device for recording the work sound, a speaker for reproducing the work sound of the instructor, and the like as necessary.
  • The control system 101 has the same configuration as in the first embodiment and includes a calculation unit 16 that processes information received from the glasses-type devices 201 and 203, and an information storage unit 17 that stores model data.
  • The calculation unit 16 includes at least a video capture unit 161 that captures the visual field images of the learner 102 and the instructor 103, the superimposition display unit 162, the difference calculation unit 163, the difference determination unit 16A, the display adjustment unit 16C, and a model generation unit 164 that generates model data from the instructor's visual field image.
  • the learner performs work for training.
  • the camera 15 captures the visual field image of the learner 102 (step S41).
  • the video capturing unit 161 receives the visual field image captured by the camera 15, and outputs the visual field image of the learner to the display unit 19B of the glasses-type device 203 of the instructor 103 (step S41B1).
  • The instructor watches the learner's training motion displayed on the display unit 19B, grasps the characteristics of the learner's work motion, and performs a model work in a style matched to those characteristics.
  • the camera 15B captures a visual field image of the instructor 103 while the instructor 103 is performing a model work.
  • The video capturing unit 161 receives the visual field image captured by the camera 15B, and the model generation unit 164 generates a model video from the visual field image of the instructor 103 and also generates model data from that model video (step S41B3).
  • the model data newly generated from the model video of the instructor 103 is stored in the information storage unit 17, and is used when the display adjustment unit 16C adjusts the display content according to the work operation of the learner.
  • The steps from step S42 onward (steps S42 to S45), in which the display adjustment unit 16C superimposes and displays the model video on the learner's visual field image, are the same as in the first embodiment.
  • In this way, the technique of an instructor not stored in the information storage unit can be utilized for the learner's learning support. A further effect is that the learner can learn the technique of an instructor who is far away from the learner.
  • In the above example, the work video of an instructor not stored in the information storage unit is recorded to create a model video, and the created model video is used for the learner's learning support.
  • The ways of allowing learners to receive training support from a remote instructor are not limited to the above example.
  • For example, it is possible to exchange video and audio between the instructor's glasses-type device and the learners' glasses-type devices.
  • On the instructor's glasses-type device, an image of the learner performing a training motion is displayed; the instructor watches the learner's training motion and gives the learner advice on the knack of the work through movements and voice. The instructor's motion image data and voice data are then sent to the learner's glasses-type device, where the instructor's motion image is displayed and the instructor's voice is played. In this way, advice from an instructor far from the learner is conveyed to the learner during training, and the learner can acquire the technique more efficiently.
  • In the third embodiment, artificial intelligence (AI) that has learned the techniques of a plurality of instructors selects, as the learner's work motion progresses, the model video closest to the characteristics of the learner's work motion, so that the displayed model video changes from moment to moment.
  • the learning support system of the third embodiment includes a glasses-type device 201 and a control system 101 connected to the glasses-type device 201 via a network.
  • The control system 101 has the same configuration as in the first embodiment and includes a calculation unit 16 and an information storage unit 17, as shown in FIG. 24. The configuration of the calculation unit 16 in the third embodiment is also the same as in the first embodiment, including at least the video capture unit 161, the superimposition display unit 162, the difference calculation unit 163, the difference determination unit 16A, and the display adjustment unit 16C.
  • The information storage unit 17 in the third embodiment includes a storage unit 171 holding the learner's basic information, skill data, and weakness list, and an artificial intelligence unit 172 that changes the display of the model video using the technical features of a plurality of instructors.
  • The artificial intelligence unit 172 includes a learned model 172A that selects the combination data whose features are closest to those of the learner's work motion, and a result display unit 172B that selects the model video to be displayed from the combination data selected by the learned model 172A.
  • The learned model 172A is a model that has learned, in addition to model videos, a large number of combination data pairing learner features with instructor features.
  • the learned model 172A is composed of, for example, a neural network.
  • The combinations of learner features and instructor features learned by the learned model 172A include the six motion elements of the work motion (position, speed, pressure, motion locus, gazing point, and sound) and basic information (for example, dominant hand, the examination site of the ultrasonic examination being trained, the examination contents, and the patient's body shape).
  • the camera 15 captures the visual field image of the learner 102.
  • the image capturing unit 161 captures the view image of the learner captured by the camera 15 via the network 30.
  • The learned model 172A extracts the features of the learner's work motion from the learner's visual field image, compares them with the many combination data it has learned, and selects the combination data whose features are closest to the learner's (step S411).
  • The result display unit 172B selects, from the combination data chosen by the learned model 172A, the model video whose features are closest to the learner's, and transfers it to the superimposition display unit 162.
  • the superimposition display unit 162 superimposes and displays the received model video on the visual field image of the learner (step S42).
  • The flow then returns to step S41, and this flow is repeated every unit time.
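The selection performed by the learned model 172A can be approximated, for illustration, by a nearest-neighbour search over numeric feature vectors. A real implementation would use a trained neural network as the text states; this sketch only conveys the "closest combination data" idea, with hypothetical feature names.

```python
import math

def select_model_video(learner_features, combination_data):
    """Stand-in for the learned model 172A: return the stored combination
    whose numeric feature vector is closest (Euclidean distance) to the
    learner's current features."""
    def dist(a, b):
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))
    return min(combination_data, key=lambda c: dist(learner_features, c["features"]))
```

Repeating this call every unit time, as in the flow above, makes the displayed model video track the learner's changing motion characteristics.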
  • In the third embodiment, the model video closest to the characteristics of the learner's work motion is selected and displayed, so the model video most suitable for instructing the learner is shown from moment to moment. This makes it possible to provide training content matched to the learner's characteristics at any point during training, and the learner can acquire the technique more efficiently.
  • In a modification of the third embodiment, the artificial intelligence unit changes the method of displaying the difference for motion elements whose difference from the instructor exceeds the threshold, depending on the magnitude of the difference of each motion element.
  • the learning support system of the modification has the same configuration as the learning support system of the third embodiment, but the contents of the combination data learned by the learned model 172A are different.
  • The learned model 172A has learned a large number of combinations of the magnitude of the difference for each motion element and the optimal method for displaying a difference of that magnitude. For example, for the difference between the positions of the learner's and instructor's elbows, the learned model 172A learns such combinations of difference magnitude and display method as: displaying the text "Lift your elbow more!" as shown in FIG. when the difference is at or below a predetermined value, indicating it with a sign such as 19b, and superimposing the skeleton diagram 19c shown in FIG. 6 on the instructor's hand in the model video when the difference is large.
  • The learned model 172A also holds model videos and learner and instructor features. The features of the learner and the instructor include the six motion elements of the work motion (position, speed, pressure, motion trajectory, gazing point, and sound) and basic information (for example, dominant hand, the examination site of the ultrasonic examination being trained, the examination contents, and the patient's body type).
  • Steps S41 and S42, from when the camera 15 captures the visual field image of the learner 102 until the model video is superimposed on the visual field image, are the same as in the flow of the first embodiment (FIG. 3).
  • Next, the difference calculation unit 163 extracts each of the learner's six motion elements from the learner's visual field image, reads each of the instructor's six motion elements from the learned model 172A, and calculates the difference between the learner's motion and the instructor's motion for each motion element (step S43C).
  • The difference determination unit 16A determines, for each motion element, whether or not the difference calculated by the difference calculation unit 163 is greater than or equal to the threshold, and notifies the artificial intelligence unit 172 of the determination result (step S44C).
  • The learned model 172A takes in the learner's visual field image and extracts the motion elements whose difference is at or above the threshold, together with the magnitude of each difference. It then compares the extracted motion elements and difference magnitudes with its learned combination data and selects the combination data containing the optimal display method of the difference for each motion element (step S45C1).
  • The result display unit 172B determines, from the combination data selected by the learned model 172A, the most effective method of displaying the difference of each motion element for the learner's training, and instructs the display adjustment unit 16C to adjust the display using that method (step S45C2).
  • Following the instruction received from the result display unit 172B, the display adjustment unit 16C adjusts the display method of the model video and displays it on the display unit 19 (step S45).
  • In the learning support system of this modification, the difference between the learner's motion and the instructor's motion is displayed using the optimal display method for each motion element, so by looking at the displayed differences the learner can easily understand how to change his or her motion.
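The mapping from difference magnitude to display method in this modification can be sketched with fixed tiers. In the patent the mapping is learned by the model 172A; the numeric tier boundaries below are placeholders for illustration only.

```python
def choose_difference_display(diff, text_max=10.0, sign_max=30.0):
    """Pick a display method from the magnitude of a motion-element
    difference: a text hint (e.g. 'Lift your elbow more!') for small
    gaps, a sign such as 19b for medium gaps, and the superimposed
    skeleton diagram 19c for large gaps. Tier boundaries are assumed."""
    if diff <= text_max:
        return "text_hint"
    if diff <= sign_max:
        return "sign"
    return "skeleton_overlay"
```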
  • Next, the learning support system of the fourth embodiment will be described, focusing on differences from the learning support system of the first embodiment. The learning support system according to the fourth embodiment includes a glasses-type device 201 and a control system 101 connected to the glasses-type device 201 via a network 30, and an ultrasonic imaging device 104 is further connected to the network 30.
  • the ultrasonic imaging device 104 may have the same configuration as the conventional ultrasonic imaging device.
  • In the fourth embodiment, the captured image acquired by the ultrasonic imaging device is displayed on the display unit 19 of the glasses-type device 201 side by side with the work video of the learner 102 and the model video, as shown in FIG. 28, for example.
  • In a conventional ultrasonic examination, the examiner's viewpoint is not on the hand holding the probe but on the screen displaying the captured image. Since the examiner cannot watch the hand motion and the captured image at the same time, it is difficult for a learner unfamiliar with the technique to view the screen while operating the probe, which is one of the factors that make learning difficult.
  • In the fourth embodiment, the image captured by the ultrasonic imaging device is displayed by the glasses-type device 201 around the image of the learner's hands, so the viewpoint movement of the learner 102 is minimized and the learner can learn while intuitively grasping the hand movement and the resulting change in the captured image.
  • In the fourth embodiment, the sensing device 204 described in the first embodiment may be connected to the control system 101 so that the learner 102 can use the sensing device 204 during training. Alternatively, a sensor similar to that of the sensing device 204 may be incorporated into the probe of the ultrasonic imaging apparatus 104 so that the learner 102 can learn the technique of handling the probe while using an actual probe.
• The control system 101 is connected to the glasses-type device 201 via the network 30, but the configuration of the control system 101 is not limited to the above example.
• The control system 101 may instead be implemented in the memory 18 of the glasses-type device 201, or in a memory that can be externally attached to the glasses-type device 201.
• In the embodiments above, the glasses-type device 201 is used as the learning support device, but the learning support device only needs to include at least a camera and a display unit.
• For example, the learning support device may be a combination of a helmet or hair band equipped with a camera and a display arranged near the user, or a portable display.
• The present invention can be applied to learning in any technical field. For example, it may be used for learning surgical techniques, rehabilitation, sports, body movement in traditional performing arts, nursing care methods, cooking, playing musical instruments, handwriting practice, and so on. The present invention may also be used for technical learning in industrial fields such as wiring work and welding work.

Abstract

The purpose of the present invention is to provide a learning assist system with which a learner can efficiently acquire a technique. The learning assist system comprises: a display unit 19 worn by a learner 102; an imaging unit 15 worn by the learner 102, for capturing the field-of-view video of the learner 102; a storage unit for storing a model video, which is a video of the work motion of an instructor serving as a model for the motion of the learner 102; and an arithmetic unit 16. The arithmetic unit 16 causes the model video to be displayed on the display unit 19 superimposed on the field-of-view video captured by the imaging unit 15, and dynamically changes the display content of the model video in accordance with the features of the work motion of the learner 102 included in the field-of-view video.

Description

Learning support system, learning support device, and program
The present invention relates to a learning support system, a learning support device, and a program used when learning a technique.
Traditionally, the transmission of techniques in various technical fields has been carried out by having learners of the technique actually train and work under the guidance of an instructor.
To convey advanced skills to a learner, it may be necessary to convey knacks that are difficult to express in words, but it is not easy for an instructor to convey such knacks to a learner efficiently. In recent years, in order to convey a technique acquired by an instructor to learners efficiently, apparatuses and methods have been proposed that convert the instructor's technique into data, store it, and display the stored technical data to the learner (for example, Patent Document 1).
Patent Document 1 discloses a method of extracting and storing the instructor's movements, such as the position, movement amount, and movement speed of the instructor's hand, under various environments. In this method, the instructor performs the same work repeatedly while the environment is changed, and the instructor's motion is measured each time by motion capture or the like, so that changes in the environment and changes in the motion are stored in association with each other.
A technique has also been proposed in which the measured motion information of the instructor is converted into an image, and the image is displayed superimposed on the actual work scene by augmented reality (AR) technology using a glasses-type wearable device (smart glasses) or the like (for example, Patent Document 2).
For example, in the work support device of Patent Document 2, an information processing unit compares the surface shape of the work-target object measured by an imaging unit with a surface shape of the object stored in advance, calculates the amount of thermal deformation to be applied at each point of the work-target object, then calculates the heating position and heating amount required to obtain that deformation, and displays an image indicating the heating position on a head-mounted display, superimposed on the video of the work-target object.
Patent Document 1: JP-A-2003-281287
Patent Document 2: JP-A-2009-69954
Conventional learning support systems that display an instructor's digitized technique to a learner, such as that of Patent Document 2, present the technique uniformly, both to learners of different proficiency levels and to learners with different characteristics such as the dominant hand. With such a system, learners of differing proficiency and characteristics may not be able to learn efficiently.
An object of the present invention is to provide a learning support system that enables a learner to acquire a technique efficiently.
To solve the above problem, the present invention includes: a display unit worn by a learner; an imaging unit worn by the learner that captures the learner's field-of-view video; a storage unit that stores a model video, which is a video of the work motion of an instructor serving as a model for the learner's motion; and an arithmetic unit. The arithmetic unit displays the model video on the display unit superimposed on the field-of-view video captured by the imaging unit, and dynamically changes the display content of the model video in accordance with the features of the learner's work motion included in the field-of-view video.
According to the present invention, since the displayed model video changes dynamically in accordance with the learner's work motion, each learner can train to acquire the technique at a pace suited to that learner. This allows the learner to acquire the technique efficiently.
FIG. 1: Explanatory diagram showing the configuration of the learning support system of the first embodiment.
FIG. 2: Block diagram showing the configuration of the control system 101 of the learning support system of the first embodiment.
FIG. 3: Flowchart showing the flow of learning training in the first embodiment.
FIG. 4: Example screen in which a position difference is displayed on the display unit 19.
FIG. 5: Example screen in which a position difference is displayed on the display unit 19.
FIG. 6: Display example of a position difference.
FIG. 7: Display example of a position difference.
FIG. 8: Display example of a position difference.
FIG. 9: Display example showing differences in motion elements between a learner and an instructor.
FIG. 10: Display example showing a difference in pressure between a learner and an instructor.
FIG. 11: Block diagram showing the details of the configuration of the glasses-type device 201.
FIG. 12: Example of a display change for a motion element whose difference has decreased.
FIG. 13: Block diagram showing a configuration example of the control system 101 of the learning support system.
FIG. 14: Flowchart showing the overall flow of the learning support system of the first embodiment.
FIG. 15: Diagram showing the input/output sequence of the control system 101, the glasses-type device 201, and the sensing device 204.
FIG. 16: (a), (b) Examples of the UI displayed on the display unit when entering basic information and the type of training.
FIG. 17: Explanatory diagram of difference judgment, proficiency evaluation, and review.
FIG. 18: Example of a log stored in the information storage unit 17.
FIG. 19: Flowchart showing the flow during the level judgment mode.
FIG. 20: Explanatory diagram showing the configuration of the learning support system of the second embodiment.
FIG. 21: Block diagram showing the configuration of a glasses-type device worn by an instructor.
FIG. 22: Block diagram showing the configuration of the control system 101 of the learning support system of the second embodiment.
FIG. 23: Flowchart showing the operation flow of the learning support system of the second embodiment.
FIG. 24: Block diagram showing the configuration of the learning support system of the third embodiment.
FIG. 25: Flowchart showing the operation flow of the learning support system of the third embodiment.
FIG. 26: Flowchart showing the operation flow of a learning support system according to a modification of the third embodiment.
FIG. 27: Explanatory diagram showing the configuration of the learning support system of the fourth embodiment.
FIG. 28: Display example of the display unit 19 of the learning support system of the fourth embodiment.
The learning support system according to embodiments of the present invention will be described below.
<<Learning Support System of First Embodiment>>
As the learning support system of the first embodiment, an example of a system for a glasses-type device with which a learner learns how to operate the probe of an ultrasonic imaging apparatus will be described. As shown in FIG. 1, this learning support system includes a glasses-type device 201 worn by a learner 102, and a control system 101 that is connected to the glasses-type device 201 via a network 30 (a wireless LAN (Local Area Network), as an example) and controls the functions of the glasses-type device 201. The glasses-type device 201 includes a display unit 19 and an imaging unit (camera) 15 that captures the field-of-view video of the learner 102.
As shown in FIG. 2, the control system 101 includes an arithmetic unit 16 that performs the various computations of this learning support system based on information received from the glasses-type device 201, and an information storage unit 17 that stores various information. The arithmetic unit 16 displays a model video (see FIG. 4, for example), which is a video of the work motion of an instructor serving as a model for the work motion of the learner 102, on the display unit 19 superimposed on the field-of-view video captured by the camera 15, and dynamically changes the display content of the model video in accordance with the features of the work motion of the learner 102 included in the field-of-view video. As will be described in detail later, the information storage unit 17 stores at least model data including the model video and position information of the instructor's hand.
The arithmetic unit 16 executes various computations by executing a learning support program stored in advance in the information storage unit 17. Specifically, the arithmetic unit 16 includes: a video capture unit 161 that captures the field-of-view video of the learner 102 from the camera 15; a superimposition display unit 162 that superimposes the model video on the field-of-view video of the learner 102 and outputs the superimposed, processed image to the display unit 19; a difference calculation unit 163 that calculates the difference between the work motion of the instructor included in the model video and the work motion of the learner 102 included in the field-of-view video; a difference judgment unit 16A that judges whether the difference obtained by the difference calculation unit 163 is equal to or greater than a predetermined threshold; and a display adjustment unit 16C that changes the display content of the displayed model video based on the difference calculated by the difference calculation unit 163.
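The division of labor among units 161, 162, 163, 16A, and 16C can be illustrated with a minimal sketch. The class and method names below are illustrative assumptions only (the disclosure does not specify an implementation), and the motion-extraction, difference, and adjustment steps are passed in as plain functions:

```python
# Minimal sketch of the pipeline inside arithmetic unit 16.
# All class, method, and key names are illustrative assumptions.

class ArithmeticUnit:
    def __init__(self, threshold, storage):
        self.threshold = threshold  # threshold used by difference judgment unit 16A
        self.storage = storage      # stands in for information storage unit 17

    def process_frame(self, view_frame, extract_motion, compute_diff, adjust):
        """One pass per frame: capture -> diff -> judge -> adjust display."""
        model = self.storage["model_motion"]   # instructor's model data
        learner = extract_motion(view_frame)   # feature from the field-of-view video
        diff = compute_diff(learner, model)    # difference calculation unit 163
        if diff >= self.threshold:             # difference judgment unit 16A
            return adjust(diff)                # display adjustment unit 16C
        return None                            # below threshold: display unchanged
```

A caller would supply concrete `extract_motion`, `compute_diff`, and `adjust` functions for the motion element being trained (position, speed, pressure, and so on).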
The flow of the training operation of the learning support system of the first embodiment will be described with reference to FIG. 3. This flow describes the work operation in the case where the learner 102 trains while holding the probe 202 in his or her hand.
The training operation in the flow of FIG. 3 is controlled and executed by the control system 101. First, the camera 15 captures the field-of-view video of the learner 102. The video capture unit 161 captures, via the network 30, the learner's field-of-view video imaged by the camera 15 (step S41). Next, the superimposition display unit 162 reads out the model video stored in the information storage unit 17, adjusts its position so that predetermined reference points of the instructor's hand and probe in the model video overlap the images of the learner's hand and probe 202 included in the learner's field-of-view video, and displays the position-adjusted model video on the display unit 19 (step S42). Next, as shown in FIG. 4, the difference calculation unit 163 calculates the difference between the position of the probe 202 held by the learner and the position of the probe held by the instructor. Specifically, the difference calculation unit 163 first calculates, from the learner's field-of-view video, the position of the center 202b of the surface of the learner's probe 202 that touches the surface of the examination target. The difference calculation unit 163 also reads out, from the model data stored in the information storage unit 17, the position of the center 202c of the surface of the instructor's probe that touches the examination target. The difference calculation unit 163 then calculates the difference between the position of the center 202c of the instructor's probe and the position of the center 202b of the learner's probe 202 (step S43).
Next, the difference judgment unit 16A judges whether the difference obtained by the difference calculation unit 163 in step S43 is equal to or greater than a predetermined threshold, and if the difference is equal to or greater than the threshold, outputs the result (difference judgment result) to the display adjustment unit 16C (step S44). The display adjustment unit 16C performs image processing that shifts the position of the center 202c of the probe in the model video to the position of the center 202b of the learner's probe 202, and outputs the processed model video to the glasses-type device 201 via the network 30. The model video received by the glasses-type device 201 is displayed on the display unit 19 (step S45).
This allows the learner to see a video in which the instructor's probe position overlaps the learner's own, so that the learner can match the orientation and angle of his or her hand and probe to those of the instructor, and can efficiently learn how to move the hand and probe.
When the model video processed in step S45 has been displayed to the learner, the flow returns to step S41. The learning support system repeats the above flow until the learner's training ends. If the difference calculated by the difference calculation unit 163 is less than the threshold in step S44, the flow likewise returns to step S41.
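Steps S43 through S45 above can be sketched as follows: the position difference between the probe-face centers 202b and 202c is a Euclidean distance, and when it meets the threshold the model overlay is shifted onto the learner's probe. The function names and the threshold value are assumptions for illustration:

```python
# Sketch of steps S43-S45. Names and the threshold are illustrative assumptions.

THRESHOLD_MM = 10.0  # assumed threshold used by difference judgment unit 16A

def position_difference(learner_center, model_center):
    """Step S43: Euclidean distance between probe-face centers 202b and 202c."""
    return sum((a - b) ** 2 for a, b in zip(learner_center, model_center)) ** 0.5

def adjust_model_overlay(learner_center, model_center, threshold=THRESHOLD_MM):
    """Steps S44-S45: if the difference meets the threshold, return the
    translation that shifts the model video onto the learner's probe;
    otherwise return None and the flow goes back to step S41."""
    if position_difference(learner_center, model_center) < threshold:
        return None
    return tuple(l - m for l, m in zip(learner_center, model_center))
```

The returned translation vector stands in for the image processing performed by the display adjustment unit 16C.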
Note that the display adjustment unit 16C aligned the hand position of the model video with the hand position in the learner's field-of-view video using the center 202b of the learner's probe 202 and the center 202c of the instructor's probe as reference points, but other reference points may be used. For example, the display adjustment unit 16C may adjust the display position of the model video by detecting the position of a part of the learner's body, such as aligning the position of a fingertip of the learner 102 with the position of a fingertip of the instructor in the model video.
[Display informing the learner of a position difference]
Hereinafter, other display adjustment examples performed by the display adjustment unit 16C when the difference between the learner's and instructor's positions calculated by the difference calculation unit 163 is equal to or greater than the threshold will be described.
The display adjustment unit 16C can display the difference in position between the center 202b of the learner's probe 202 and the center 202c of the instructor's probe as an arrow 102d, as shown in FIG. 4. By looking at the length and direction of the arrow 102d, the learner can recognize how far his or her motion deviates from the model, and can easily bring the position of his or her probe 202 closer to the position of the instructor's probe while watching the arrow 102d.
The position of the instructor's elbow may also be included in the model video, with the elbow position information stored in the information storage unit 17. In this case, the difference calculation unit 163 can detect the position of the learner's elbow from the learner's field-of-view video and calculate the difference between the position of the learner's elbow and the position of the instructor's elbow. When this difference is equal to or greater than the threshold, the display adjustment unit 16C can display a concrete textual instruction telling the learner how to move the body, such as "Lift your elbow more!" as shown in FIG. 5. The display adjustment unit 16C may also display a hint by placing a sign such as an arrow 19b on the body part to be moved.
The information storage unit 17 may also store a skeleton diagram 19c showing the positions of the bones and joints of the instructor's hand 102b. In this case, as shown in FIG. 6, the display adjustment unit 16C can display the skeleton diagram 19c superimposed on the image of the instructor's hand 102b. By displaying the skeleton diagram 19c, the learner can easily correct the position of his or her own hand while observing the displayed joint positions of the instructor's hand 102b.
Further, as shown in FIG. 7, the information storage unit 17 may store a color distribution diagram 19d shaped like the working hand or arm, indicating by color the magnitude of the force to be applied to each part of the hand at each point in the work. In this case, the difference calculation unit 163 detects the position and shape of the learner's hand and arm from the learner's field-of-view video, and the display adjustment unit 16C can display the color distribution diagram 19d superimposed on the learner's hand 102c. For example, as in thermography, warm colors can be superimposed on parts of the body where the learner should apply strong muscular force and cool colors on parts where only slight force is needed, letting the learner recognize how much force to apply.
Further, as shown in FIG. 8, the display adjustment unit 16C can detect a region where the instructor's probe and the learner's probe 202 overlap and a region where their positions deviate, display the overlapping region in white, and, for the deviating region, display the instructor's probe and the probe 202 in different colors. Because the region where the learner's and instructor's positions deviate is displayed in a different color for each of them, the difference between their positions is emphasized, and the learner can easily recognize the direction in which to move his or her hand.
When the position of the learner's probe 202 deviates from the position of the instructor's probe, the display adjustment unit 16C may rewind the model video to the point at which the deviation began and play it back. The replayed model video may be divided into short time segments, with the model video played back repeatedly for each segment. This lets the learner practice the work motions he or she is poor at many times, enhancing the learning effect.
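The rewind-and-replay behavior described above can be sketched roughly as follows; the segment length and function names are illustrative assumptions, since the disclosure only states that the rewound span may be divided into short segments:

```python
# Sketch of the rewind-and-replay behavior. The segment length (in frames)
# and the function name are illustrative assumptions.

SEGMENT_FRAMES = 30  # assumed segment length (about 1 s at 30 fps)

def replay_segments(deviation_start, current_frame, segment=SEGMENT_FRAMES):
    """Return the (start, end) frame ranges to replay, covering the span
    from the onset of the position deviation up to the current frame.
    Each range can then be looped until the learner succeeds."""
    segments = []
    start = deviation_start
    while start < current_frame:
        end = min(start + segment, current_frame)
        segments.append((start, end))
        start = end
    return segments
```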
The glasses-type device 201 may also be equipped with a vibration sensor. In that case, at the moment the position of the learner's probe 202 deviates from the position of the instructor's probe, the vibration sensor mounted on the glasses-type device 201 may give the learner force feedback such as vibration or impact.
By changing the display content as described above, the learner can be notified of a positional deviation, so that the learner can easily grasp the correct probe and hand positions and correct the deviation.
[Types of display adjustment]
<Motion elements other than position>
In the above description, an example was described in which the difference calculation unit 163 calculates the difference in the probe or hand position (a motion element) between the learner and the instructor. However, as motion elements for calculating the difference between the learner's and instructor's work motions, the difference calculation unit 163 may detect one or more motion elements such as speed, pressure, motion trajectory, gaze point, and sound, in addition to position. These motion elements of the instructor are included in the model data and stored in the information storage unit 17. The method of calculating the learner-instructor difference for each motion element, and the method of displaying that difference, will be described with reference to FIG. 9.
• Speed
From the video captured by the camera 15, the difference calculation unit 163 calculates the speed at which the learner moves the hand, and calculates the difference between that speed and the moving speed of the instructor's hand stored in the information storage unit 17, based on the difference in the positions of the learner's and instructor's hands. When the difference judgment unit 16A judges that the difference between the learner's speed and the instructor's speed is equal to or greater than a predetermined value, the display adjustment unit 16C adjusts the playback speed of the model video to match the learner's speed. For example, when the work motion of the learner 102 is slower than the instructor's work motion by the predetermined value or more, the display adjustment unit 16C slows the playback speed of the model video to match the speed of the learner 102. The learner's and instructor's speeds may each be displayed, for example, by a difference in the color temperature of the displayed image, or by a bar meter or numerical value, so that the learner can recognize the difference between his or her own speed and the instructor's speed.
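The playback-speed adjustment for the speed element might be sketched as below. The relative-difference threshold and the function name are assumptions, since the disclosure only states that playback is matched to the learner's speed:

```python
# Sketch of matching the model video's playback rate to the learner's speed.
# The threshold and function name are illustrative assumptions.

SPEED_DIFF_THRESHOLD = 0.2  # assumed, as a fraction of the model speed

def playback_rate(learner_speed, model_speed, threshold=SPEED_DIFF_THRESHOLD):
    """Factor by which to play the model video; 1.0 = recorded speed.
    When the learner is much slower, the video slows down to match."""
    if model_speed <= 0:
        return 1.0
    relative_diff = abs(learner_speed - model_speed) / model_speed
    if relative_diff < threshold:
        return 1.0  # within tolerance: no adjustment
    return learner_speed / model_speed  # e.g. 0.5 -> play at half speed
```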
• Pressure
Information on the pressure with which the instructor presses the probe against the examination target (hereinafter, the instructor's pressure) is stored in advance in the information storage unit 17. The superimposition display unit 162 reads out the instructor's pressure information from the information storage unit 17 and superimposes on the learner's field-of-view video a circle 103b2 whose diameter indicates the magnitude of that pressure, as shown in FIG. 9. The difference calculation unit 163 also calculates the pressure with which the learner presses the probe 202 against the examination target from the change in color of the learner's hand, fingertips, or nails when pressing. When the difference judgment unit 16A judges that the difference between the learner's pressing pressure and the instructor's pressure is equal to or greater than a predetermined value, the display adjustment unit 16C displays the difference between the instructor's and learner's pressures, for example by the sizes or colors of the circles. The difference can be emphasized, for example, by filling in the region bounded by the circle 103b1 indicating the pressure of the learner 102 and the circle 103b2 indicating the instructor's pressure, as shown in FIG. 10. By moving the probe so as to shrink this filled region, the learner can train to press the examination target with a pressure close to the instructor's.
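The pressure display of FIGS. 9 and 10 can be sketched with pressure mapped to circle radius and the emphasized region computed as the area between the two circles. The pressure-to-radius scale factor and the function names are illustrative assumptions:

```python
import math

# Sketch of the pressure-circle display (FIGS. 9-10). Pressure is drawn as a
# circle whose radius grows with the applied force; the region between the
# learner's circle 103b1 and the instructor's circle 103b2 is filled in to
# emphasize the difference. The scale factor is an illustrative assumption.

RADIUS_PER_NEWTON = 2.0  # assumed px/N scale factor

def pressure_radius(pressure_n, scale=RADIUS_PER_NEWTON):
    return pressure_n * scale

def highlight_area(learner_pressure, model_pressure):
    """Area (px^2) of the ring between the two pressure circles, i.e. the
    filled region the learner tries to shrink toward zero."""
    r1 = pressure_radius(learner_pressure)
    r2 = pressure_radius(model_pressure)
    return math.pi * abs(r1 * r1 - r2 * r2)
```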
• Motion trajectory
Information on the path along which the instructor moves the probe (the motion trajectory) is stored in advance in the information storage unit 17. The superimposition display unit 162 reads out the motion trajectory information from the information storage unit 17 and superimposes a flow line indicating the trajectory on the learner's field-of-view video. At that time, as shown in FIG. 9, the instructor's trajectory may be displayed in different colors depending on whether it lies behind the current position of the learner's probe 202 (past trajectory 103c) or ahead of it (future trajectory 103d). The difference calculation unit 163 also calculates the motion trajectory of the learner's probe 202 from the learner's field-of-view video captured by the camera 15, and obtains the difference between the learner's and instructor's probe trajectories. When the difference judgment unit 16A judges that the difference is equal to or greater than a predetermined value, the display adjustment unit 16C may make the learner recognize the difference between the instructor's and learner's trajectories by superimposing a flow line indicating the trajectory of the learner's probe 202 on the already-displayed trajectories 103c and 103d of the instructor's probe.
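Splitting the instructor's trajectory into the past portion 103c and the future portion 103d relative to the learner's current position might look like the following sketch; matching by the nearest trajectory point is an assumed simplification not stated in the disclosure:

```python
# Sketch of splitting the instructor's trajectory into past (103c) and
# future (103d) portions so each can be drawn in a different color.
# Nearest-point matching is an assumed simplification.

def nearest_index(model_path, learner_pos):
    """Index of the trajectory point closest to the learner's probe."""
    def d2(p):
        return (p[0] - learner_pos[0]) ** 2 + (p[1] - learner_pos[1]) ** 2
    return min(range(len(model_path)), key=lambda i: d2(model_path[i]))

def split_trajectory(model_path, current_index):
    """Return (past_points, future_points) around the learner's progress.
    The current point is included in both so the flow lines join up."""
    past = model_path[: current_index + 1]
    future = model_path[current_index:]
    return past, future
```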
- Gaze point: Information on the point the instructor focuses on during the work (the gaze point) is stored in the information storage unit 17 in advance. The superimposed display unit 162 reads the gaze point information from the information storage unit 17 and superimposes a point 103a indicating the gaze point on the learner's visual field image. The glasses-type device 201 includes a line-of-sight sensor 20, as shown in FIG. 11 and described later, which detects the learner's gaze point. The difference calculation unit 163 calculates the difference between the learner's gaze point and the instructor's gaze point 103a stored in the information storage unit 17. When the difference determination unit 16A determines that this difference is equal to or greater than a threshold, the display adjustment unit 16C emphasizes the instructor's gaze point 103a, for example by blinking it. In addition to the instructor's gaze point 103a, the display adjustment unit 16C may display the learner's gaze point detected by the line-of-sight sensor 20 on the display unit 19, for example with a cross symbol.
- Sound: The glasses-type device 201 may be provided with a recording device that records work sounds. In this case, the recording device acquires the learner's work sound, and the difference calculation unit 163 calculates the difference between the recorded work sound of the learner and the work sound of the model video stored in the information storage unit 17. When the difference determination unit 16A determines that the difference between the learner's and instructor's work sounds is equal to or greater than a threshold, the display adjustment unit 16C indicates that the sounds deviate, by means of waveforms, colors, characters, or the like.
Note that, for each motion element, statistically common patterns of deviation and their causes may be stored in the information storage unit 17 in advance. In that case, the difference calculation unit 163 may be configured to determine, for a motion element in which the learner's work motion deviates from the instructor's, the cause of the difference between the two. The display adjustment unit 16C may also display the statistically common deviation patterns and their causes for each motion element on the display unit 19.
[Glasses-type device 201]
Here, the configuration of the glasses-type device 201 is described concretely. As shown in FIG. 11, the glasses-type device 201 comprises a pair of glasses and, mounted on part of the glasses, a camera 15, a display unit 19, a wireless communication device 13, a CPU 12, a memory 18, and a line-of-sight sensor 20.
The camera 15 captures images in a predetermined direction, such as the direction of the user's line of sight, and measures the user's work motion.
The display unit 19 projects the model video, as adjusted by the display adjustment unit 16C under the control of the calculation unit 16 and the CPU 12, into the user's field of view.
The display unit 19 may instead have a structure that projects the image onto the user's retina. A wearable display may also be used in place of the glasses; in this case, the user's visual field image captured by the camera 15 and the model video adjusted by the display adjustment unit 16C are superimposed and shown on the wearable display.
The wireless communication device 13 handles communication between the CPU 12 and the control system 101.
The CPU 12 controls the operation of each part of the glasses-type device 201 based on information received from the calculation unit 16 of the control system 101. Specifically, it instructs the wireless communication device 13 to transmit the video data captured by the camera 15 to the video capture unit 161, and reflects information received from the superimposed display unit 162 and the display adjustment unit 16C in the video sent to and shown on the display unit 19.
The memory 18 stores the images captured by the camera 15 and the calculation results of the CPU 12 as needed. The memory 18 can also store the video to be displayed on the display unit 19.
The line-of-sight sensor 20 is a sensor that detects where the user's gaze point is, and it transmits the detected gaze point data to the CPU 12.
[Training mode]
In the description above, the display adjustment unit 16C was assumed to align the position of the instructor's probe with the position of the learner's probe 202 at all times. The information storage unit 17 may store a plurality of training modes (Table 1), such as a beginner mode for basic training, in which the instructor's hand position is aligned at all times so that the learner can train at their own pace, and a normal mode, in which the instructor's hand position is corrected only when the difference exceeds a predetermined value. The training modes stored in the information storage unit 17 are described below.
(Table 1)
In the beginner mode, while the model video plays, it follows the work motion of the learner 102, and its superimposed position and playback speed are changed dynamically. Because the model video is adjusted to the learner's pace in this mode, the learner can closely observe the instructor's work and learn the correct form.
The normal mode is a training mode that emphasizes the speed of the instructor's work: the model video is played so that the learner can improve the quality of their work while becoming accustomed to that pace. In this mode, the model video is basically played at the instructor's work speed, and only when the work motion of the learner 102 falls behind by more than a certain amount are the superimposed position and playback speed of the model video adjusted to match the learner 102.
The processing in each training mode follows the flow shown in FIG. 3. In each mode, the display adjustment unit 16C dynamically changes the display content of the model video according to the work motion of the learner 102, based on the difference determination result from the difference determination unit 16A. Specifically, based on that result, the display adjustment unit 16C dynamically changes the transparency of each motion element of the model video, or switches its display on and off, so that motion elements whose difference is equal to or greater than a predetermined value are displayed in a form the learner 102 can recognize more easily than motion elements whose difference is below the predetermined value.
Specifically, for example, as shown in FIG. 12(a), a circle 103b2 indicating the instructor's pressure, the instructor's gaze point 103a, and the motion trajectories 103c and 103d are displayed. If the learner's pressure and gaze point change so that the differences from the instructor's pressure and gaze point decrease, the learner is working with appropriate pressure and gaze; the display adjustment unit 16C then performs transparency control that increases the transparency of the displayed instructor pressure 103b2 and gaze point 103a, as shown in FIGS. 12(a) to 12(c). Conversely, for a motion element whose difference from the instructor's work motion is increasing, the display adjustment unit 16C lowers the transparency and displays it with more emphasis.
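The transparency control just described can be sketched as a simple mapping from per-element difference to overlay opacity: elements the learner handles well fade out, elements with a growing difference stay opaque. The specific mapping and threshold are illustrative assumptions.

```python
# Hedged sketch of the transparency (opacity) control per motion element.
# THRESHOLD and the linear mapping are assumptions for illustration.

THRESHOLD = 1.0  # allowable difference for one motion element

def element_opacity(difference: float) -> float:
    """Return display opacity in [0, 1] for one motion element's overlay."""
    # At or below the threshold the overlay becomes fully transparent;
    # above it, opacity grows with the difference and is capped at 1.0.
    if difference <= THRESHOLD:
        return 0.0
    return min(1.0, (difference - THRESHOLD) / THRESHOLD)
```

With this mapping, an element the learner has mastered (difference 0.5) is hidden, while a badly deviating element (difference well above the threshold) is drawn fully opaque.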
This transparency control emphasizes the display of the motion elements the learner is weak at (here, moving the probe along the correct trajectory), so the learner can train while focusing on those elements.
[Program structure and overall training flow]
The following describes an example configuration of the technical training program executed by the learning support system of this embodiment, and the flow of the training as a whole. The information storage unit 17 stores a plurality of modes as a learning support program, for example as shown in Table 1. Specifically, in addition to the beginner mode and normal mode described above, a level determination mode, a guidance mode, and a look-back mode are stored in the information storage unit 17.
When executing these modes, the calculation unit 16 of the control system 101 includes, as shown in FIG. 13, in addition to the video capture unit 161, the superimposed display unit 162, the difference calculation unit 163, the difference determination unit 16A, and the display adjustment unit 16C, a level evaluation unit 16B that evaluates the skill level of the learner 102 from the difference determination results of the difference determination unit 16A, and a statistical calculation unit 16D that performs statistical calculations on the skill data of the learner 102.
A known method can be used for the superimposed display unit 162 to align and overlay the learner's visual field image and the model video. For example, the superimposed display unit 162 determines the display position of the model video using, as reference markers, things that enter the field of view of the learner 102 during skill acquisition, such as the learner's body (hands, etc.), the object of the ultrasonic examination, or the probe the learner holds, or marks attached to or around them.
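As a minimal illustration of such marker-based alignment: if the same reference marker is detected at one pixel position in the model video and another in the learner's visual field image, the model video can be drawn shifted by their difference. A real system would also handle rotation and scale; this translation-only version is purely an assumption for the sketch.

```python
# Translation-only sketch of marker-based overlay alignment.
# Real alignment would typically estimate a full homography.

def overlay_offset(marker_in_view, marker_in_model):
    """Translation (dx, dy) that moves the model frame onto the view frame."""
    return (marker_in_view[0] - marker_in_model[0],
            marker_in_view[1] - marker_in_model[1])

dx, dy = overlay_offset((320, 240), (300, 250))
# The model video is then drawn at its original position plus (dx, dy).
```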
The functions of the calculation unit 16 are realized as software running on a CPU or GPU. Some or all of the functions of the calculation unit 16 can also be realized in hardware such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
In addition to the learning support program, which includes training modes for skill acquisition, the information storage unit 17 stores a data table for the learner 102 that includes the learner's basic information (for example, differences in individual physical characteristics such as age, dominant hand, habits, visual acuity, and grip strength), model data on the work motions performed by the instructor for each type of training, a log of the skill data of the learner 102, a list of the learner's weak points, and the like. The model data stored in the information storage unit 17 includes the model video, the plurality of motion elements of the instructor's work motion (position, speed, pressure, motion trajectory, gaze point, and sound), basic information about the model video (for example, the dominant hand, the examination site and examination content of the ultrasonic examination performed in the model video, and the patient's body type), and the display method for the difference when the instructor's and learner's motions differ for each motion element. The information storage unit 17 also stores images captured by the camera 15 and the calculation results of the calculation unit 16 as needed, and can further store the video to be displayed on the display unit 19.
The overall flow of training by the learning support system is described below with reference to FIGS. 14 and 15. This flow uses six motion elements (position, speed, pressure, motion trajectory, gaze point, and sound) as the motion elements for judging the difference between the learner's work motion and the instructor's work motion.
This flow also describes the case where the learner trains while holding the sensing device 204 (see FIG. 1) instead of the probe 202. When training is performed with the sensing device 204, the sensing device 204 is connected to the control system 101 via the network 30 in addition to the glasses-type device 201. The sensing device 204 can include a pressure sensor, a tilt sensor, a temperature sensor, a humidity sensor, a light sensor, a haptic device (for example, a vibrator) that gives tactile feedback to the learner 102, and the like. When the sensing device 204 is connected to the control system 101, the information it acquires can be used in the calculations of the calculation unit 16. The superimposed display unit 162 can also convert information such as pressure and tilt detected by the sensing device 204 into an image that can be shown on the display unit 19 and superimpose it on the visual field image of the learner 102. When the sensing device 204 is equipped with sensors that acquire information that cannot be obtained from the camera 15, such as temperature, humidity, or light intensity, that sensor information can be used by the display adjustment unit 16C to control the display of the model video. Even for information that can be obtained from the camera 15, using the sensing device 204 may allow it to be acquired with higher accuracy. It is preferable that the sensing device 204 has the same shape as the tool used for the work being trained (for example, the probe of an ultrasonic imaging apparatus).
[Step S1]
First, the control system 101 receives, from the learner, the input of the basic conditions required for training. Specifically, the control system 101 displays a UI screen such as that in FIG. 16 on the display unit 19. The learner enters their basic information and the type of training on the displayed UI screen. The entered information is sent to the control system 101 via the network 30 and stored in the information storage unit 17. The content that executes the training program is then launched and delivered to the glasses-type device 201.
[Step S2]
Next, the control system 101 delivers guidance-mode content to the glasses-type device 201, and the guidance mode is carried out. In the guidance mode, the learner performs work motions following the guide displayed on the display unit 19, and can thereby learn how to use the content: the work motions the learner 102 should perform in the training mode executed later, confirmation of what is displayed, motivation for the training, and so on. In this mode, the learner 102 is shown their skill level, how the difference between the learner's and instructor's work motions and the proficiency judgment are displayed, the differences between the training modes, and the like. For example, the learner 102 can experience and learn how the information is displayed when their work motion deviates from the instructor's during a training mode. This guidance mode may be carried out after the level determination mode, and may be repeated until the learner has mastered how to use the content.
[Step S3]
Next, the control system 101 delivers level-determination-mode content to the glasses-type device 201, and the level determination mode is carried out. The specific flow of this mode is described later; in this mode, the learner works without the model video being played. The difference determination unit 16A and the level evaluation unit 16B evaluate the current proficiency of the learner 102 and, according to that evaluation, set the difficulty of the next training mode (beginner mode, normal mode, etc.). The proficiency evaluation method is the same as the level evaluation method in each training mode.
[Step S4]
Next, training is performed in the training mode set in the level determination mode. The specific flow in the training mode is as described in FIG. 3: based on the visual field image of the learner 102 obtained from the camera 15 and the sensing device 204, the difference calculation unit 163 obtains, for each motion element (position, speed, pressure, motion trajectory, gaze point, and sound), the difference between the motion of the learner 102 and the motion of the instructor contained in the model video. As shown in FIG. 17, the difference calculation unit 163 assigns the six motion elements to each unit time and obtains the difference for each motion element in each unit time. The difference determination unit 16A determines whether the difference calculated by the difference calculation unit 163 for each motion element is equal to or greater than the allowable difference (threshold) 116 predefined for that element, as shown in the spider chart of FIG. 17. When the difference between the learner and the instructor is equal to or greater than the threshold, the display adjustment unit 16C adjusts the display content of the model video. The motion elements whose difference is at or below the threshold 116 (the five elements other than sound in this figure) are elements the learner performed well during training, while elements whose difference exceeds the threshold 116 (sound in this figure) are elements the learner did not perform well.
[Step S5]
When the training mode ends, the control system 101 launches the content that judges proficiency, and the level evaluation unit 16B judges the technical proficiency of the learner during the training. Specifically, using the difference determination results from the difference determination unit 16A, the level evaluation unit 16B evaluates the learner's proficiency according to how many of the six motion elements had a difference at or below the threshold 116, as shown in the spider chart of FIG. 17.
The relationship between proficiency and the number of motion elements at or below the threshold 116 can be set arbitrarily. For example, the number of levels may be matched to the number of the six motion elements whose difference is at or below the threshold 116; when five of the motion elements are at or below the threshold 116, as in this figure, the level evaluation unit 16B may judge this learner's proficiency as "level 5".
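The level evaluation above (level = number of motion elements within their allowable difference) can be sketched as follows. The element names and threshold values are illustrative assumptions; only the counting rule comes from the text.

```python
# Sketch of the level evaluation: count the motion elements whose
# difference is at or below its per-element threshold (threshold 116).
# Threshold values are assumptions for illustration.

THRESHOLDS = {"position": 10.0, "speed": 0.3, "pressure": 0.5,
              "trajectory": 15.0, "gaze": 20.0, "sound": 0.2}

def evaluate_level(differences: dict) -> int:
    """Proficiency level = number of elements within the allowable difference."""
    return sum(1 for name, d in differences.items() if d <= THRESHOLDS[name])

# Example matching the text: five elements pass, sound does not -> level 5.
diffs = {"position": 4.0, "speed": 0.1, "pressure": 0.2,
         "trajectory": 9.0, "gaze": 12.0, "sound": 0.8}
level = evaluate_level(diffs)
```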
Depending on the judged proficiency, the learning mode and the content to be emphasized can be changed in the next session for the same learner 102. For example, if the learner's level exceeds a certain level, such as when the differences of all motion elements fall at or below the threshold 116 in the beginner-mode training results, the next training session may advance to the next level, the normal mode.
[Step S6]
Next, the proficiency judged in step S5 is shown to the learner on the glasses-type device 201, and the look-back mode is executed according to the proficiency. In the look-back mode, the work motions of the learner from the training mode of step S4 are replayed to the learner 102 so that they can objectively grasp their current skill level. In this step, it is preferable to show the learner how much their proficiency has improved over the initial proficiency judged in step S3.
For example, if the learner's visual field images captured by the camera 15 during the training mode were stored in the information storage unit 17, the superimposed display unit 162 in this mode reads out the stored visual field images and displays a look-back video in which the model video is superimposed on them. In particular, the work motions of the learner 102 that caused a motion element's difference to reach or exceed the threshold 116 (i.e., that the learner did not perform well) may be retrieved and replayed. This allows the learner to objectively grasp how far their current technique differs from the model technique, and where their weaknesses lie.
In this look-back mode, the statistical calculation unit 16D may generate statistical graphs (histograms, radar charts like FIG. 17, etc.) based on the difference determination results for each motion element from the difference determination unit 16A, and display the learner's skill level in graph form. Instead of a statistical graph, skill data and a list of weak points such as those shown in FIG. 18 may be shown to the learner on the display unit 19. Displaying a statistical graph or list makes it easy for the learner 102 to confirm the motion elements they are good or bad at. The list and statistical graphs are stored in the information storage unit 17.
The look-back mode may also be executed after the level determination mode, and is preferably executed after every training mode. By carrying out the look-back mode each time after the level determination mode or a training mode, the learner 102 can undertake the next training session while aware of their current skill level.
Through the above steps, training for the learner 102 to acquire the technique is carried out.
Here, the specific flow of the level determination mode of step S3 is described with reference to FIGS. 15 and 19.
[Step S31]
When the level determination mode starts, level-determination-mode content is shown on the display unit 19 of the glasses-type device 201, and the camera 15 captures the visual field image of the learner 102. The visual field image obtained from the camera 15 is transmitted to the control system 101. The sensing device 204 measures the work motion of the learner 102 with each of its sensors and transmits the measured data to the control system 101.
In the level determination mode, the level may be judged using the same task as the training mode the learner will perform later, or using a basic task simpler than the training mode. Judging the level with the same task as the training mode reduces the number of modes that must be stored in the information storage unit 17. On the other hand, judging the level with a basic task simpler than the training-mode task increases the number of stored modes, but makes it possible to determine whether the learner 102 has reached the basic level.
[Step S32]
Based on the information obtained from the camera 15, the difference calculation unit 163 calculates, for the six motion elements, the difference between the work motion of the learner 102 and the model work motion (not displayed in this mode).
[Step S33]
The difference determination unit 16A determines, for each motion element, whether the difference for the learner 102 is equal to or greater than the respective threshold, and transmits the difference determination results to the level evaluation unit 16B.
[Step S34]
The level evaluation unit 16B judges the learner's proficiency according to the difference determination results and decides whether training will be performed in the normal mode or the beginner mode.
The level evaluation unit 16B decides to perform training in the normal mode when the learner's level is at or above a predetermined level, and decides to perform training in the beginner mode when the learner's level is below the predetermined level.
[Step S35]
The level evaluation unit 16B transmits the level determination result decided in step S34 to the glasses-type device 201, and the learner's level is displayed. As with the look-back mode performed after the training mode, it is preferable to execute content that reviews the learner's work motion in the level determination mode.
[Step S36]
The display adjustment unit 16C constructs training content according to the mode decided in step S34 and delivers the training content to the glasses-type device 201. After the training content is delivered, the training flow of the learning support system returns to step S4 in FIG. 14, and the training mode is executed. Depending on the training mode to be constructed, the display adjustment unit 16C may change the playback speed of the model video, or may adjust the display by methods other than the playback speed of the model video.
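The level determination flow of steps S32 to S34 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the element values, the per-element thresholds, and the rule of allowing up to two failed elements are all invented for the example.

```python
# Hypothetical sketch of the level determination flow (steps S32-S34).
# Element names, thresholds, and the evaluation rule are illustrative
# assumptions, not taken from the specification.

MOTION_ELEMENTS = ["position", "speed", "pressure", "trajectory", "gaze", "sound"]
THRESHOLDS = {e: 1.0 for e in MOTION_ELEMENTS}  # per-element thresholds (assumed)

def calc_differences(learner, model):
    """Step S32: per-element difference between learner and model motions."""
    return {e: abs(learner[e] - model[e]) for e in MOTION_ELEMENTS}

def judge_differences(diffs):
    """Step S33: True where the difference is at or above the threshold."""
    return {e: diffs[e] >= THRESHOLDS[e] for e in MOTION_ELEMENTS}

def evaluate_level(judgements, max_failures=2):
    """Step S34: pick the training mode from the number of failed elements."""
    failed = sum(judgements.values())
    return "normal" if failed <= max_failures else "beginner"

learner = {e: 0.5 for e in MOTION_ELEMENTS}
model = {e: 0.0 for e in MOTION_ELEMENTS}
mode = evaluate_level(judge_differences(calc_differences(learner, model)))
# all differences are below the thresholds here, so mode == "normal"
```

A learner failing more elements than `max_failures` would instead be routed to the beginner mode of step S36.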
[Other display adjustment examples]
Examples of adjusting the display of the model video according to each mode and each motion element, using techniques other than those described above, are given below.
For example, when the difference determination result of the difference determination unit 16A shows that the difference between the gaze point of the learner 102 and the gaze point of the instructor is large and exceeds the threshold, the display adjustment unit 16C can cause the display unit 19 to display the correct gaze point for a longer time or to highlight it, thereby helping the learner 102 acquire the correct gaze point.
In the beginner mode, as described above, the instructor's motion speed is slowed to match the learner's motion speed; however, lowering the playback speed is not the only way to make the instructor's work motion follow the learner's work motion. For example, when the learner's motion falls behind by a certain amount or more, the display adjustment unit 16C may pause the model video, or may construct content that rewinds the model video and replays it from the previous process step. Although the normal mode was described as a mode in which the learner trains at the instructor's motion speed, even in the normal mode, when the learner's work motion falls behind by a certain amount or more, the display adjustment unit 16C may pause the playback of the model video or otherwise let the learner's work catch up with the model video.
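The pause-and-rewind behavior above might be realized as in the following sketch. The lag thresholds and the process-step timestamps are assumptions for illustration only.

```python
# Illustrative sketch of how playback of the model video could follow a
# lagging learner: keep playing, pause, or rewind to the previous process
# step. Thresholds (in seconds) and step boundaries are assumed values.

def adjust_playback(model_time, learner_time, step_starts,
                    pause_lag=2.0, rewind_lag=5.0):
    """Return (action, new_model_time) given how far the learner lags."""
    lag = model_time - learner_time
    if lag >= rewind_lag:
        # Rewind to the start of the previous process step and replay it.
        prev = max((t for t in step_starts if t < learner_time), default=0.0)
        return "rewind", prev
    if lag >= pause_lag:
        return "pause", model_time   # hold until the learner catches up
    return "play", model_time

steps = [0.0, 10.0, 20.0]
action, t = adjust_playback(model_time=26.0, learner_time=12.0, step_starts=steps)
# the learner lags 14 s, so playback rewinds to the step that began at 10.0
```

Called once per unit time, such a function would let the display adjustment unit 16C keep the model video synchronized with the learner in either mode.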
The display adjustment unit 16C may construct the content so that, when a motion element of the learner's work motion matches the corresponding motion element of the instructor, the match is indicated to the learner. The indication may use visual display, sound, or vibration feedback. If the same indication method is used for every motion element, the learner can easily recognize, for each matched motion element, that the work was performed as in the model.
The display adjustment unit 16C may also construct the content so as to give the learner 102 feedback that conveys a sense of progress when the deviation between the learner's and instructor's motion elements, observed over a certain period, improves by a predetermined amount or more (that is, when the technique of the learner 102 improves beyond a certain level), or when a series of work motions is completed. For example, the display adjustment unit 16C can indicate the level-up to the learner through display, sound, or vibration feedback, or by highlighting the improved motion element.
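One way the progress feedback could be triggered is sketched below: deviation samples gathered over a period are compared, and a level-up event is emitted when the improvement exceeds a set amount. The gain threshold and the feedback channels are assumptions, not values from the specification.

```python
# Hypothetical sketch: emit a level-up feedback event when the deviation
# between learner and instructor motion elements, sampled over a period,
# shrinks by at least min_gain (an assumed threshold).

def improvement_feedback(history, min_gain=0.3):
    """history: deviation samples over a period, oldest first.
    Returns a feedback event when the deviation shrank by >= min_gain."""
    if len(history) < 2:
        return None
    gain = history[0] - history[-1]
    if gain >= min_gain:
        return {"type": "level_up", "channels": ["display", "sound", "vibration"]}
    return None

event = improvement_feedback([0.9, 0.7, 0.4])
# deviation improved by 0.5 >= 0.3, so an event is produced
```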
In addition to the visual and auditory feedback by display and sound, the display adjustment unit 16C may construct content that gives the learner 102 tactile feedback such as vibration or impact through the sensing device 202, so that the learner feels as if actually performing the work.
As described above, in the learning support system of the first embodiment, the display of the model video can be changed dynamically in accordance with the learner's work motion. Therefore, learners of different levels or with different characteristics can each train at a learning pace that suits them, making it easier to acquire skilled techniques.
In the learning support system of the first embodiment, subtle knacks that are needed to acquire the technique but are difficult to express in words are extracted as motion elements, and the difference between the instructor's and learner's technique is calculated and the difference determination result displayed for each motion element. The learner can therefore grasp, for each motion element, the fine points of the advanced technique and how well he or she has mastered them. Because the difference determination result is presented in a form the learner can grasp intuitively, the learner can acquire the technique through intuitive motions.
Further, in the learning support system of the first embodiment, information such as pressure and line of sight, which is inherently difficult to discern from appearance alone, is superimposed on the learner's visual field image. Even knacks that are hard to pick up when being taught directly by an instructor can thus be learned while being checked.
In the learning support system of the first embodiment, an example was shown in which one learner trains using the learning support program stored in the control system 101; however, a plurality of learners can train simultaneously or at different times, each using a glasses-type device connected to the control system via the network. The learning support system of the first embodiment can therefore efficiently transmit the advanced technique acquired by the instructor to many learners.
Further, although skills are mastered in this embodiment using the two training modes of beginner mode and normal mode, there may be a single training mode, or three or more modes. The information storage unit 17 may also store modes effective for the learner's skill acquisition other than the modes described above.
Further, the internal components of the control system 101 (the calculation unit 16 and the information storage unit 17) may be hosted in a cloud connected to the glasses-type device 201 via the network 30.
<<Learning Support System of Second Embodiment>>
The learning support system of the second embodiment is described below, focusing on its differences from that of the first embodiment. Like the first embodiment, it includes a glasses-type device 201 and a control system 101 connected to the glasses-type device 201 via a network. However, whereas the learning support system of the first embodiment adjusts the display of a model video based on the instructor's model data already stored in the information storage unit 17 of the control system 101, the learning support system of the second embodiment generates new model data from an instructor's model video not currently stored in the information storage unit 17 and uses the newly generated model data for display adjustment. In this respect, the learning support system of the second embodiment differs from that of the first embodiment.
As shown in FIG. 20, the learning support system of the second embodiment includes a glasses-type device 201 wearable by the learner 102, a device (for example, a glasses-type device) 203 wearable by an instructor 103 at a location remote from the learner, and a control system 101 connected to the glasses-type device 201 and the glasses-type device 203 via the network 30.
The glasses-type device 203 has the same configuration as the glasses-type device 201 and, as shown in FIG. 21, includes a camera 15B that captures the visual field image of the instructor 103, a wireless communication device 13, a CPU 12, a memory 18, and a display unit 19B on which the learner's motion video is displayed. The glasses-type device 203 may further include, as necessary, a device for recording work sounds, a speaker for reproducing the instructor's work sounds, and the like.
In the second embodiment, the control system 101 has the same configuration as the control system 101 of the first embodiment, and includes a calculation unit 16 that processes information received from the glasses-type devices 201 and 203, and an information storage unit 17 that stores model data.
Also in the second embodiment, as shown in FIG. 22, the calculation unit 16 includes at least a video capture unit 161 that captures the visual field images of the learner 102 and the instructor 103, a superimposed display unit 162, a difference calculation unit 163, a difference determination unit 16A, a display adjustment unit 16C, and a model generation unit 164 that generates model data from the instructor's visual field image.
The flow of the training operation of the learning support system of the second embodiment is described below with reference to FIG. 23, focusing on its differences from that of the first embodiment. First, the learner performs the work for training, and the camera 15 captures the visual field image of the learner 102 (step S41). The video capture unit 161 receives the visual field image captured by the camera 15 and outputs the learner's visual field image to the display unit 19B of the glasses-type device 203 of the instructor 103 (step S41B1). Watching the learner's training motion displayed on the display unit 19B, the instructor grasps the characteristics of the learner's work motion and performs model work in a style matching those characteristics. The camera 15B captures the visual field image of the instructor 103 while the instructor 103 performs the model work (step S41B2). The video capture unit 161 receives the visual field image captured by the camera 15B, and the model generation unit 164 generates a model video from the visual field image of the instructor 103 and also generates model data from that model video (step S41B3). The model data newly generated from the model video of the instructor 103 is stored in the information storage unit 17 and is used when the display adjustment unit 16C adjusts the display content according to the learner's work motion. In the second embodiment, step S42 and the subsequent steps (steps S42 to S45), in which the display adjustment unit 16C superimposes the model video on the learner's visual field image, are the same as in the first embodiment.
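As a rough sketch of the round trip of steps S41B1 to S41B3: the learner's frames go to the instructor's display, the instructor's frames are captured, and model data is generated from them and stored. Every name below is a hypothetical placeholder, and per-frame mean brightness stands in for the real extraction of the six motion elements.

```python
# A minimal, hypothetical sketch of the second-embodiment data flow
# (steps S41B1-S41B3). The feature extraction is only a stand-in.

class ModelGenerator:
    """Stand-in for the model generation unit 164."""
    def generate(self, instructor_frames):
        # "Model data" here is just per-frame mean brightness, a placeholder
        # for the real six motion elements.
        return [sum(f) / len(f) for f in instructor_frames]

def round_trip(learner_frames, instructor_frames, storage):
    # S41B1: the learner's view video would be shown on display unit 19B.
    shown = list(learner_frames)
    # S41B2-S41B3: capture the instructor's view and generate model data,
    # then store it for later use by the display adjustment unit 16C.
    storage["model_data"] = ModelGenerator().generate(instructor_frames)
    return shown, storage

storage = {}
_, storage = round_trip([[1, 2]], [[2, 4], [6, 8]], storage)
# storage["model_data"] == [3.0, 7.0]
```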
In addition to the effects obtained with the learning support system of the first embodiment, the learning support system of the second embodiment can utilize, for the learner's learning support, techniques of an instructor that are not stored in the information storage unit. Moreover, it allows the learner to learn the technique of an instructor located away from the learner.
In the learning support system of the second embodiment, a model video is created by recording the work video of an instructor not stored in the information storage unit, and the created model video is used for the learner's learning support; however, the ways of letting a learner receive training support from a remote instructor are not limited to this example.
For example, the instructor-side glasses-type device and the learner-side glasses-type device may exchange video and audio with each other. Specifically, the instructor-side glasses-type device displays video of the learner performing the training motion with the learner-side device, and the instructor, while watching the learner's training motion, advises the learner on work knacks through gestures and spoken comments. The video data of the instructor's motions and the audio data of the instructor's comments are sent to the learner-side glasses-type device, where the instructor's motion video is displayed and the instructor's voice is reproduced. By conveying advice from a remote instructor to the learner during training in this way, the learner can acquire the technique more efficiently.
<<Learning Support System of Third Embodiment>>
The learning support system of the third embodiment is described below, focusing on its differences from that of the first embodiment. In the learning support system of the third embodiment, artificial intelligence (AI) that has learned the techniques of a plurality of instructors selects, as the learner's work motion proceeds, the model video closest to the characteristics of that motion, so that the displayed model video changes over time.
Like the learning support system of the first embodiment, the learning support system of the third embodiment includes a glasses-type device 201 and a control system 101 connected to the glasses-type device 201 via a network.
In the third embodiment, the control system 101 has the same configuration as in the first embodiment and, as shown in FIG. 24, includes a calculation unit 16 and an information storage unit 17. The calculation unit 16 of the third embodiment is also configured like that of the first embodiment, including at least a video capture unit 161, a superimposed display unit 162, a difference calculation unit 163, a difference determination unit 16A, and a display adjustment unit 16C.
The information storage unit 17 of the third embodiment has a storage section 171 containing the learner's basic information, skill data, and weakness list, and an artificial intelligence unit 172 that changes the display of the model video using the technical characteristics of a plurality of instructors. The artificial intelligence unit 172 includes a trained model 172A that selects the combination data whose characteristics are closest to those of the learner's work motion, and a result display unit 172B that selects the model video to display from the combination data selected by the trained model 172A.
The trained model 172A is a model in which, in addition to model videos, a large number of combination data pairing learner characteristics with instructor characteristics have been learned. The trained model 172A is composed of, for example, a neural network. The learner and instructor characteristics learned by the trained model 172A each include the six motion elements of the work motion (position, speed, pressure, motion trajectory, gaze point, and sound) and basic information (for example, dominant hand, the examination site for ultrasound examination training, examination content, and patient body type).
The operation flow of the learning support system of the third embodiment is described below with reference to FIG. 25.
First, the camera 15 captures the visual field image of the learner 102. Next, the video capture unit 161 captures, via the network 30, the learner's visual field image captured by the camera 15 (step S41). The trained model 172A then extracts the characteristics of the learner's work motion from the learner's visual field image, and compares them with the many combination data learned in the trained model 172A, thereby selecting the combination data whose characteristics are closest to the learner's (step S411). The result display unit 172B selects, from the combination data selected by the trained model 172A, the model video with the characteristics closest to the learner's, and passes it to the superimposed display unit 162. The superimposed display unit 162 superimposes the received model video on the learner's visual field image (step S42). Once the model video closest to the characteristics of the learner's work motion has been displayed, the operation flow returns to step S41, and this flow is repeated every unit time.
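The nearest-match selection of step S411 can be sketched as a distance search over stored combination data. In the actual system the trained neural network 172A performs this selection; the Euclidean distance below is only a stand-in for it, and the feature vectors and video identifiers are invented for illustration.

```python
# Sketch of step S411: pick the stored (learner, instructor) combination
# whose feature vector is closest to the current learner's features.
# A nearest-neighbour search stands in for the trained model 172A.

import math

def closest_combination(learner_features, combinations):
    """combinations: list of dicts with a "features" vector and a "video" id."""
    def dist(c):
        return math.dist(learner_features, c["features"])
    return min(combinations, key=dist)

combos = [
    {"features": [0.1, 0.9], "video": "instructor_A"},
    {"features": [0.8, 0.2], "video": "instructor_B"},
]
best = closest_combination([0.75, 0.3], combos)
# best["video"] == "instructor_B"
```

Running this once per unit time, as in the flow of FIG. 25, would let the displayed model video track the learner's changing motion characteristics.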
In this way, in the learning support system of the third embodiment, the model video closest to the characteristics of the learner's work motion is updated continually, so the model video best suited to instructing the learner is displayed at each moment. Training content matching the learner's characteristics can thus be provided at any point during training, and the learner can learn the technique more efficiently.
<<Modification of Third Embodiment>>
Another example in which the model video is adjusted using artificial intelligence (AI) that has learned the techniques of a plurality of instructors is described with reference to FIG. 26. In this modification, for motion elements in which the difference between the learner's and instructor's work motions exceeds the threshold, the artificial intelligence unit changes, as needed, the method of displaying the difference according to the magnitude of each motion element's difference.
The learning support system of the modification has the same configuration as that of the third embodiment, but the contents of the combination data learned by the trained model 172A differ. In the modification, the trained model 172A contains many combinations of a difference magnitude for each motion element with the display method best suited to that magnitude. For example, regarding the difference in elbow position between learner and instructor: when the difference is at or below a predetermined value, a text message such as "Lift your elbow more!" is displayed as in FIG. 5; when the difference exceeds the predetermined value, a sign such as the arrow 19b is shown; and when the difference is larger still, a skeleton diagram 19c as shown in FIG. 6 is superimposed on the instructor's hand in the model video. Such combinations of difference magnitude and optimal display method are learned in the trained model 172A.
The trained model 172A also contains model videos and the characteristics of learners and instructors. These characteristics each include the six motion elements of the work motion (position, speed, pressure, motion trajectory, gaze point, and sound) and basic information (for example, dominant hand, the examination site for ultrasound examination training, examination content, and patient body type).
The operation flow of the learning support system of the modification of the third embodiment is described below.
In the flow of the learning support system of the modification, steps S41 and S42, from the camera 15 capturing the visual field image of the learner 102 until the model video is superimposed on the visual field image, are the same as in the flow of the first embodiment (FIG. 3). Once the model video is superimposed on the learner's visual field image, the difference calculation unit 163 extracts the learner's six motion elements from the learner's visual field image and reads the instructor's six motion elements from the model 172A. The difference calculation unit 163 then calculates, for each motion element, the difference between the learner's and instructor's motions (step S43C).
Next, the difference determination unit 16A determines, for each motion element, whether the difference calculated by the difference calculation unit 163 is equal to or greater than the threshold, and notifies the artificial intelligence unit 172 of the determination result (step S44C).
Next, the trained model 172A takes in the learner's visual field image and extracts the motion elements whose differences are at or above the threshold, together with the magnitudes of those differences. By comparing the extracted motion elements and difference magnitudes with the combination data it has learned, the trained model 172A selects, for each motion element, the combination data containing the optimal method of displaying the difference (step S45C1). From the combination data selected by the trained model 172A, the result display unit 172B determines, as the method of displaying each motion element's difference, the method most effective for the learner's training, and instructs the display adjustment unit 16C to adjust the display using that method (step S45C2). Following the instruction received from the result display unit 172B, the display adjustment unit 16C adjusts the display method of the model video and displays it on the display unit 19 (step S45). When the model video processed in step S45 has been displayed to the learner, the flow returns to step S41.
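The magnitude-dependent choice of display method (text prompt, arrow sign, skeleton overlay) described for the elbow-position example can be sketched as follows. The numeric boundaries are assumptions, and in the actual system this mapping is learned by the trained model 172A rather than hard-coded.

```python
# Sketch of the modification of the third embodiment: choose how to show
# a difference from its magnitude. The three methods follow the
# elbow-position example; the boundary values are assumed.

def display_method(diff, small=1.0, large=2.0):
    if diff <= small:
        return "text"      # e.g. the message "Lift your elbow more!"
    if diff <= large:
        return "arrow"     # a sign such as arrow 19b
    return "skeleton"      # superimpose skeleton diagram 19c on the model

methods = [display_method(d) for d in (0.5, 1.5, 3.0)]
# -> ["text", "arrow", "skeleton"]
```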
In this way, in the learning support system of the modification of the third embodiment, the difference between the learner's and instructor's motions is displayed with the display method best suited to each motion element, so the learner can easily understand from the displayed difference how his or her motion should be changed.
<<Learning Support System of Fourth Embodiment>>
The learning support system of the fourth embodiment is described below, focusing on its differences from that of the first embodiment. As shown in FIG. 27, the learning support system of the fourth embodiment includes a glasses-type device 201 and a control system 101 connected to the glasses-type device 201 via the network 30, and an ultrasound imaging apparatus 104 is further connected to the network 30.
The ultrasound imaging apparatus 104 may have the same configuration as a conventional ultrasound imaging apparatus. The captured image acquired by the ultrasound imaging apparatus is preferably displayed on the display unit 19 of the glasses-type device 201 side by side with the work video of the learner 102 and the model video, as shown for example in FIG. 28.
When an ultrasound imaging apparatus is used, the user's gaze is assumed to rest not on the hand holding the probe but on the screen displaying the captured image. Because the user cannot watch the hand motion and the captured image at the same time, watching the screen while operating the probe is difficult for a learner unfamiliar with the technique, and is one of the factors making mastery difficult.
In the fourth embodiment, the image captured by the ultrasound imaging apparatus is displayed by the glasses-type device 201 near the image of the learner's hands, which minimizes movement of the gaze of the learner 102. The learner can thus train while grasping more intuitively the hand motion and the resulting change in the captured image.
In the second to fourth embodiments, the sensing device 204 described in the first embodiment may be connected to the control system 101, and the learner 102 may use the sensing device 204 during training. Further, in the fourth embodiment, a sensor similar to the one included in the sensing device 204 may be incorporated into the probe of the ultrasonic imaging device 104, so that the learner 102 can learn probe-handling technique while using an actual probe.
<<Learning Support Device>>
In the embodiments above, an example was described in which the control system 101 is connected to the glasses-type device 201 via the network 30, but the configuration of the control system 101 is not limited to this example. The control system 101 may be implemented in the memory 18 of the glasses-type device 201, or in memory that can be externally attached to the glasses-type device 201.
In the present embodiment, the glasses-type device 201 is used as the learning support device, but the learning support device need only include at least a camera and a display unit; for example, it may be a combination of a helmet or headband equipped with a camera and a display placed near the user or a portable display.
Further, in the present embodiment, the skill learned by the learner is probe operation of an ultrasonic imaging device, but the present invention is applicable to learning in any technical field. For example, it may be used to learn surgical procedures, rehabilitation, body movements in sports or traditional performing arts, or nursing-care methods, as well as cooking, playing musical instruments, or practicing handwriting. The present invention may also be used for skill learning in industrial fields such as wiring and welding work.
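The per-motion-element display adjustment described in the claims (calculating a difference for each motion element and raising the transmittance of the model video for elements the learner already performs well) can be sketched as below. The function and parameter names are hypothetical, assumed for this sketch; the patent does not specify an implementation.

```python
def adjust_opacity(differences, threshold, base_alpha=0.9, faded_alpha=0.2):
    """Per-motion-element opacity for the overlaid model video.

    differences: dict mapping a motion-element name (e.g. probe angle,
                 pressure, hand trajectory) to a scalar difference between
                 the instructor's motion and the learner's motion.

    Elements the learner already performs well (difference below the
    threshold) are rendered more transparent (lower alpha), while elements
    that still differ stay opaque so they stand out in the overlay.
    """
    return {
        element: faded_alpha if diff < threshold else base_alpha
        for element, diff in differences.items()
    }
```

A renderer would then blend each model-video element into the field-of-view frame using its alpha, re-evaluating the differences as the learner moves so the emphasis changes dynamically.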
12 ... CPU, 13 ... wireless communication device, 14 ... motion sensor, 15 ... camera, 16 ... arithmetic unit, 16A ... difference determination unit, 16B ... level evaluation unit, 16C ... display adjustment unit, 16D ... statistical calculation unit, 17 ... information storage unit, 18 ... memory, 19 ... display unit, 20 ... gaze sensor, 30 ... network, 101 ... control system, 201, 203 ... glasses-type device, 204 ... sensing device

Claims (15)

1.  A learning support system comprising: a display unit worn by a learner; an imaging unit worn by the learner that captures the learner's field-of-view video; a storage unit that stores a model video, which is a video of the work motion of an instructor serving as a model for the learner's work motion; and an arithmetic unit,
    wherein the arithmetic unit superimposes the model video on the field-of-view video captured by the imaging unit and displays it on the display unit, and dynamically changes the displayed content of the model video in accordance with features of the learner's work motion included in the field-of-view video.
2.  The learning support system according to claim 1, wherein the arithmetic unit includes: a difference calculation unit that calculates, as needed, the difference between the instructor's work motion included in the model video and the learner's work motion included in the field-of-view video; and a display adjustment unit that changes the displayed content of the model video in accordance with the difference.
3.  The learning support system according to claim 2, wherein the difference calculation unit calculates the difference for each of a plurality of predetermined motion elements, and the display adjustment unit changes the displayed content of the model video in accordance with the difference for each motion element.
4.  The learning support system according to claim 2, wherein the difference calculation unit calculates the difference for each of a plurality of predetermined motion elements, and the display adjustment unit changes the displayed content of the model video so that, among the plurality of motion elements, motion elements whose calculated difference exceeds a predetermined value are emphasized over motion elements whose difference is equal to or less than the predetermined value.
5.  The learning support system according to claim 2, wherein the difference calculation unit calculates the difference for each of a plurality of predetermined motion elements, and the display adjustment unit increases the transmittance of the model video displayed on the display unit for those motion elements, among the plurality of motion elements, whose difference is smaller than a predetermined value.
6.  The learning support system according to claim 2, wherein the difference calculation unit calculates the difference for each of a plurality of motion elements, and the display adjustment unit determines the learner's proficiency level from the differences calculated for the plurality of motion elements and changes the content of the model video in accordance with the determined proficiency level.
7.  The learning support system according to claim 1, wherein the arithmetic unit further includes a proficiency determination unit that determines the learner's initial proficiency level prior to the process of displaying the model video on the display unit, and the display adjustment unit changes the content of the model video displayed on the display unit in accordance with the initial proficiency level.
8.  The learning support system according to claim 1, wherein the learner's field-of-view video captured by the imaging unit is stored in the storage unit, and the arithmetic unit plays back a review video in which the model video is superimposed on the learner's field-of-view video stored in the storage unit.
9.  The learning support system according to claim 1, further comprising a glasses-type device and a control system connected to the glasses-type device via a network, wherein the display unit and the imaging unit are provided in the glasses-type device worn by the learner, and the arithmetic unit and the storage unit are provided in the control system.
10.  The learning support system according to claim 9, further comprising a sensing device connected to the control system via the network, wherein the arithmetic unit dynamically changes the displayed content of the model video based on information detected by the sensing device.
11.  The learning support system according to claim 9, further comprising a device, connected to the control system via the network, that captures the instructor's field-of-view video, wherein the arithmetic unit generates the model video from the instructor's field-of-view video captured by the device.
12.  The learning support system according to claim 1, wherein the storage unit stores an artificial intelligence unit that has learned a plurality of features of learners' work motions and features of instructors' work motions, and the artificial intelligence unit selects, as needed, a model video of an instructor's work motion having features close to the features of the learner's work motion, and changes, as needed, the model video superimposed on the field-of-view video.
13.  The learning support system according to claim 9, wherein an ultrasonic imaging device is further connected to the network, and the arithmetic unit causes the display unit to display an ultrasonic image acquired by the ultrasonic imaging device.
14.  A learning support device comprising: a display unit worn by a learner; an imaging unit worn by the learner that captures the learner's field-of-view video; a storage unit that stores a model video, which is a video of the work motion of an instructor serving as a model for the learner's motion; and an arithmetic unit,
    wherein the arithmetic unit superimposes the model video on the field-of-view video captured by the imaging unit and displays it on the display unit, and dynamically changes the displayed content of the model video in accordance with features of the learner's work motion included in the field-of-view video.
15.  A program that causes a computer to function as arithmetic means for superimposing a model video, which is a video of the work motion of an instructor serving as a model for a learner's motion, on a field-of-view video captured by an imaging unit that is worn by the learner and captures the learner's field-of-view video, displaying it on a display unit worn by the learner, and dynamically changing the displayed content of the model video in accordance with features of the learner's work motion included in the field-of-view video.
PCT/JP2019/042527 2019-03-06 2019-10-30 Learning assist system, learning assist device, and program WO2020179128A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-040764 2019-03-06
JP2019040764A JP7116696B2 (en) 2019-03-06 2019-03-06 Learning support system and program

Publications (1)

Publication Number Publication Date
WO2020179128A1 true WO2020179128A1 (en) 2020-09-10

Family

ID=72337841

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/042527 WO2020179128A1 (en) 2019-03-06 2019-10-30 Learning assist system, learning assist device, and program

Country Status (2)

Country Link
JP (1) JP7116696B2 (en)
WO (1) WO2020179128A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6899105B1 (en) * 2020-10-13 2021-07-07 株式会社PocketRD Operation display device, operation display method and operation display program
WO2022180894A1 (en) * 2021-02-24 2022-09-01 合同会社Vessk Tactile-sensation-expansion information processing system, software, method, and storage medium
JP7345866B2 (en) * 2021-03-23 2023-09-19 国立大学法人 東京大学 Information processing system, information processing method and program
JP2023134269A (en) * 2022-03-14 2023-09-27 オムロン株式会社 Work support device, work support method and work support program
JP2023177620A (en) * 2022-06-02 2023-12-14 パナソニックIpマネジメント株式会社 Information processing method and system
JP7412826B1 (en) 2023-07-28 2024-01-15 株式会社計数技研 Video compositing device, video compositing method, and program
JP7432275B1 (en) 2023-07-28 2024-02-16 株式会社計数技研 Video display device, video display method, and program

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5846086A (en) * 1994-07-01 1998-12-08 Massachusetts Institute Of Technology System for human trajectory learning in virtual environments
JP2006302122A (en) * 2005-04-22 2006-11-02 Nippon Telegr & Teleph Corp <Ntt> Exercise support system, user terminal therefor and exercise support program
JP2013088730A (en) * 2011-10-21 2013-05-13 Toyota Motor East Japan Inc Skill acquisition supporting system and skill acquisition support method
WO2015097825A1 (en) * 2013-12-26 2015-07-02 独立行政法人科学技術振興機構 Movement learning support device and movement learning support method
JP2015229052A (en) * 2014-06-06 2015-12-21 セイコーエプソン株式会社 Head mounted display device, control method for the same, and computer program
JP2016077346A (en) * 2014-10-10 2016-05-16 セイコーエプソン株式会社 Motion support system, motion support method, and motion support program
JP2019012965A (en) * 2017-06-30 2019-01-24 富士通株式会社 Video control method, video control device, and video control program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TATSUYA KOBAYASHI: "An approach to the support aimed at acquiring skills by using augmented reality and motion data", IPSJ SIG TECHNICAL REPORT. CLE, vol. 2016-CLE-19, no. 14, 13 May 2016 (2016-05-13), pages 1 - 4, XP009523498, ISSN: 2188-8620 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022190774A1 (en) * 2021-03-11 2022-09-15 株式会社Nttドコモ Information processing device
WO2022257625A1 (en) * 2021-06-07 2022-12-15 Huawei Technologies Co.,Ltd. Device and method for generating haptic feedback on a tactile surface
US11714491B2 (en) 2021-06-07 2023-08-01 Huawei Technologies Co., Ltd. Device and method for generating haptic feedback on a tactile surface
WO2023242981A1 (en) * 2022-06-15 2023-12-21 マクセル株式会社 Head-mounted display, head-mounted display system, and display method for head-mounted display

Also Published As

Publication number Publication date
JP2020144233A (en) 2020-09-10
JP7116696B2 (en) 2022-08-10

Similar Documents

Publication Publication Date Title
WO2020179128A1 (en) Learning assist system, learning assist device, and program
Bark et al. Effects of vibrotactile feedback on human learning of arm motions
US11508344B2 (en) Information processing device, information processing method and program
JP6162259B2 (en) Motion learning support device and motion learning support method
US20170221379A1 (en) Information terminal, motion evaluating system, motion evaluating method, and recording medium
US20100167248A1 (en) Tracking and training system for medical procedures
JP2021531504A (en) Surgical training equipment, methods and systems
US20150004581A1 (en) Interactive physical therapy
WO2013154764A1 (en) Automated intelligent mentoring system (aims)
JP7157424B2 (en) INTERACTIVE INFORMATION TRANSMISSION SYSTEM AND INTERACTIVE INFORMATION TRANSMISSION METHOD AND INFORMATION TRANSMISSION SYSTEM
JP7082384B1 (en) Learning system and learning method
WO2020152779A1 (en) Rehabilitation system and image processing device for higher brain dysfunction
WO2013161662A1 (en) Motion guide presentation method and system therefor, and motion guide presentation device
US20220277506A1 (en) Motion-based online interactive platform
JP6014450B2 (en) Motion learning support device
US20190355281A1 (en) Learning support system and recording medium
WO2020082181A1 (en) Precise teleguidance of humans
US20230169880A1 (en) System and method for evaluating simulation-based medical training
Fitzgerald et al. Usability evaluation of e-motion: a virtual rehabilitation system designed to demonstrate, instruct and monitor a therapeutic exercise programme
Tadayon A person-centric design framework for at-home motor learning in serious games
JP2020134710A (en) Surgical operation training device
Webel Multimodal Training of Maintenance andAssembly Skills Based on Augmented Reality
JP7281924B2 (en) Information transmission system
US20230384864A1 (en) Skill acquisition assistance method, skill acquisition assistance system, and computer readable recording medium storing control program
Krauthausen Robotic surgery training in AR: multimodal record and replay

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19917783

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19917783

Country of ref document: EP

Kind code of ref document: A1