WO2015097825A1 - Movement Learning Support Apparatus and Movement Learning Support Method (動き学習支援装置及び動き学習支援方法) - Google Patents
Movement learning support apparatus and movement learning support method
- Publication number
- WO2015097825A1 (PCT/JP2013/084957)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- learning support
- movement
- teaching material
- motion
- Prior art date
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B23/00—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
- G09B23/28—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
- G09B23/285—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine for injections, endoscopy, bronchoscopy, sigmoidscopy, insertion of contraceptive devices or enemas
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1124—Determining motor skills
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/162—Testing reaction times
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/40—Detecting, measuring or recording for evaluating the nervous system
- A61B5/4076—Diagnosing or monitoring particular conditions of the nervous system
- A61B5/4082—Diagnosing or monitoring movement diseases, e.g. Parkinson, Huntington or Tourette
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B71/00—Games or sports accessories not covered in groups A63B1/00 - A63B69/00
- A63B71/06—Indicating or scoring devices for games or players, or for other sports activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/20—Drawing from basic elements, e.g. lines or circles
- G06T11/203—Drawing of straight lines or curves
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B11/00—Teaching hand-writing, shorthand, drawing, or painting
- G09B11/04—Guide sheets or plates; Tracing charts
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B15/00—Teaching music
Definitions
- The present invention relates to a movement learning support apparatus and a movement learning support method for learning a movement through a teaching material video of that movement, for example in laparoscopic surgery training.
- Patent Document 1 describes a technique in which a target image 11, captured in advance by a TV camera and stored in a digital memory, is displayed on the foreground (front side) of a TV monitor while an image 12 of a person taking a lesson is displayed on the background (back side) of the same monitor, so that the movements of both can be compared visually by displaying them simultaneously.
- Patent Document 2 describes an action learning apparatus that provides an effective learning method by generating an image of a teacher's action in the same frame as a reference frame of a student image.
- This apparatus includes a host computer 12, a monitor 10, a motion sensing device 15, and a sensor 20. In the first step of training, a motion sequence of a teacher is acquired, and a virtual image of the teacher's motion trajectory, together with the trajectory of the student's imitating movement, is displayed on the monitor 10. It is further described that, as the student's imitation ability improves, the computer 12 automatically increases the speed of the teacher's motion.
- Non-Patent Document 1 describes a laparoscopic surgery training device in which a teaching material video is divided at motion breaks into about ten segments, allowing partial learning. The device first plays back one segment of the video on the monitor, and each time the segment ends, it switches to the image of the learner's forceps; the last frame of the segment video is displayed semi-transparently for 0.5 s every 1.2 s. When the forceps have reached the target posture, the learner operates the foot pedal to advance to the next motion segment, and the next model video starts playing from the same posture. Subjective reports were obtained that, when the video is switched between self and other, the forceps appear to begin moving on their own and to proceed to the next procedure.
- Non-Patent Document 2 describes a system that records an insertion procedure of a skilled physician in time series and reproduces it so that a trainee can experience and learn the insertion procedure. It also proposes a quantitative evaluation method for insertion procedures, making effective training possible by evaluating trainees' techniques. A training cycle of (1) Teaching (demonstration of exemplary insertion procedures using the record/reproduction system), (2) Trial (insertion of an endoscope by the trainee), and (3) Evaluation (presentation of each evaluation value obtained as a result of insertion) is described.
- Patent Documents 1 and 2 and Non-Patent Document 2 do not advance learning by dividing a series of movements into a plurality of segmented videos and displaying them on a monitor in turn, alternately with the learner's own image. Accordingly, the relationship between the evaluation of a segment and the transition to the next segment is not described at all.
- Non-Patent Document 1 describes that the teaching material video is advanced to the next motion segment by the learner operating the pedal; advancing the segments of the teaching material video by any other method is not described.
- An object of the present invention is to automatically switch to the next segment of the teaching material video when the learner's movement is close to that of the teaching material video, thereby improving the clarity of the learner's motor image through the illusion effect and effectively supporting improvement of the learning effect.
- The motion learning support apparatus according to the present invention alternately displays on a monitor, for each segment, the segmented teaching material videos of a motion teaching video divided in the time direction and the learner's practice video, captured by an imaging unit, in which the learner imitates the motion of each segmented teaching material video. The apparatus includes a motion detection unit that detects the learner's motion, an evaluation unit that evaluates the similarity of the learner's motion to the motion of the segmented teaching material video, and a first learning support processing means for switching the monitor from the image of the imaging unit to the next segmented teaching material video when the evaluation unit judges the motions to be similar.
- The motion learning support method according to the present invention alternately displays on a monitor, for each segment, the segmented teaching material videos of a teaching material video related to motion, divided in the time direction, and the learner's practice video, captured by an imaging unit, in which the learner imitates the motion of each segmented teaching material video. The learner's motion is detected, the similarity of the learner's motion to the motion of the segmented teaching material video is evaluated, and when a similar evaluation is obtained, the monitor is switched from the image of the imaging unit to the next segmented teaching material video.
- According to the present invention, when the learner's movement is detected, the similarity of the learner's movement to the movement of the segmented teaching material video is evaluated, and the display can be switched to the next segmented teaching material video. Therefore, when the learner's movement is similar or close to the movement of the segmented teaching material video, the video is switched in a timely manner so that the learner's movement appears to continue from the practice video into the next segmented teaching material video. The learner thus has the illusion of starting the next segment of movement with the same movement as the model, and can retain the movement as a clearer and more detailed motor image, so that the learning effect is further improved.
- According to the present invention, by automatically advancing the segmented teaching material video to the next segment, the clarity of the motor image can be improved through the illusion effect induced in the learner, and the learning effect can be improved.
- FIG. 1 is a schematic configuration diagram showing an embodiment of the movement learning support apparatus according to the present invention; FIG. 2 is a circuit configuration diagram of the same; FIG. 3 is a diagram showing an example of frame images.
- FIG. 1 is a schematic configuration diagram showing an embodiment of a motion learning support apparatus according to the present invention.
- the movement learning support apparatus 1 can be applied to an apparatus that learns various movements.
- the motion learning support device 1 will be described as a device applied to support for learning (practice) of ligature suture motion using forceps in laparoscopic surgery.
- the movement learning support device 1 includes a practice table device 10 and an information processing device 20.
- the practice table device 10 includes a dry box 11 that simulates a patient's torso, a base 12 on which the dry box is placed at a predetermined position above, and a monitor 13 that displays an image.
- the dry box 11 has a cylindrical body 111 that is horizontally oriented and has a flat bottom surface.
- the housing 111 has an upper surface portion 112 formed in a convex curved surface so as to simulate the abdomen of a human body.
- the upper surface portion 112 is made of an opaque or translucent member, and a plurality of circular holes are formed at appropriate positions.
- A film material 113 such as a resin sheet is affixed over each circular hole, and a cross-shaped notch is formed at its center; this simulates the cylindrical trocar through which the forceps and the endoscope camera, described later, are inserted and removed.
- Alternatively, a structure simulating an actual trocar may be positioned at each hole.
- The forceps 114 are inserted from above through the cross-shaped cut in the film material 113.
- The forceps 114 have a pair of finger insertion portions on the hand side and a pair of openable and closable action portions (a pair of clamping members) on the other side, the two being connected by a pipe portion of predetermined length.
- the pipe part is internally provided with a mechanism for transmitting the operation force at the finger insertion part to the action part.
- a pair of action portions at the front end is opened and closed in conjunction with an opening / closing operation from the finger insertion portion by the user, so that the object can be clamped or opened.
- By opening the action portions, the forceps 114 can receive the needle and suture thread described later between the pair of action portions, and by then closing the pair of action portions, can grip the needle and suture thread.
- a dummy affected part such as a resin material (not shown) is disposed inside the dry box 11.
- The dummy affected part is formed, for example, in a rectangular parallelepiped shape, and a dummy incision that can be sutured is formed on its upper surface.
- In this embodiment, the film material 113 is used for inserting and removing only the forceps 114, and the endoscope camera 115 is fixed at an appropriate position in the dry box 11.
- The endoscope camera 115 presents the treatment space to the learner so that it can be observed.
- The endoscope camera 115 is set so that the dummy affected part is at the center of its field of view, and images the movement of the distal action portion of the forceps 114 as it sutures the dummy affected part.
- the base 12 is provided with a foot pedal as an example of the operation unit 121 at an appropriate position, for example, at the bottom.
- the operation unit 121 includes a swingable structure and a switch that is turned on by a stepping operation.
- the exercise table device 10 is provided with a motion detector 116.
- the motion detection unit 116 includes a magnetic generator 1161 and a magnetic sensor 1162.
- the magnetic generator 1161 is a magnetic signal generation source, and is fixed at an appropriate position on the base 12.
- The magnetic sensor 1162 is attached to a predetermined portion of the forceps 114 so as to be oriented along three axial directions, and detects its three-dimensional position and orientation by sensing the magnetic signals generated from the magnetic generator 1161 along the three axes.
- The position and orientation of the action portion (the site of interest) at the tip of the forceps 114 are calculated by a motion information acquisition unit 215, described later, from the information detected by the motion detection unit 116, using the sensor's attachment position and orientation on the forceps 114 and the known shape and dimensions of the forceps 114. Note that the position and orientation of the action portion at the tip of the forceps 114 may instead be detected by adopting a three-dimensional acceleration sensor in place of the magnetic motion detection unit 116, or by analyzing the captured image of the endoscope camera 115.
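The tip-pose calculation above can be sketched in code. This is an illustrative example, not part of the patent disclosure: the function names, the single-axis (yaw) rotation, and the 100 mm sensor-to-tip offset are assumptions; in practice the full three-axis orientation reported by the magnetic sensor 1162 would be used.

```python
# Hypothetical sketch of the calculation attributed to the motion information
# acquisition unit 215: the magnetic sensor 1162 reports its own position and
# orientation, and the tip pose follows from a fixed rigid-body offset given
# by the known forceps geometry. Yaw-only rotation is used for brevity.
import math

def rot_z(yaw):
    """3x3 rotation matrix about the z axis."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(R, v):
    """Matrix-vector product for a 3x3 matrix R and a length-3 vector v."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def tip_pose(sensor_pos, sensor_yaw, tip_offset):
    """Tip position = sensor position + R(yaw) * fixed offset along the shaft."""
    d = apply(rot_z(sensor_yaw), tip_offset)
    return [sensor_pos[i] + d[i] for i in range(3)]

# Sensor at the origin, rotated 90 degrees about z, tip 100 mm along the shaft:
pos = tip_pose([0.0, 0.0, 0.0], math.pi / 2, [100.0, 0.0, 0.0])
print(pos)  # the 100 mm offset now points along +y
```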
- the monitor 13 is attached to the base 12 in this embodiment.
- the monitor 13 is disposed at a position that is easy for the learner to see during movement learning using the dry box 11, preferably at a position corresponding to the rear portion of the base 12 and the viewpoint height of the learner.
- the information processing apparatus 20 inputs / outputs information to / from the practice base apparatus 10, and uses the input information and other information to create a motion learning teaching material and send it to the monitor 13 as will be described later. To do.
- FIG. 2 is a circuit configuration diagram showing an embodiment of the motion learning support apparatus according to the present invention.
- the practice base apparatus 10 is provided with a microphone 1151 and a speaker 131.
- The microphone 1151 captures the model performer's tips and movement guidance as audio information at the time of teaching material creation.
- the speaker 131 is for reproducing the audio information acquired by the microphone 1151 during learning.
- The information processing apparatus 20 includes a control unit 21 composed of a CPU (Central Processing Unit), and, connected to the control unit 21, a ROM (Read Only Memory) 22, a RAM (Random Access Memory) 23, an operation unit 24, and, as necessary, a communication unit 25.
- the ROM 22 stores necessary processing programs and information necessary for executing the programs in advance, and the RAM 23 executes information processing and temporarily stores processing information.
- The RAM 23 also includes a follow-up learning teaching material storage unit 231 that stores a series of model motion videos, and a procedure learning teaching material storage unit 232 that stores segmented teaching material videos obtained by dividing the series of model motion videos into a video for each motion element, as described below.
- the operation unit 24 includes a touch panel, a mouse, a keyboard, and the like, and performs operation instructions necessary for creating learning materials and executing learning support processing.
- The communication unit 25 is used, when two motion learning support devices are connected, to exchange video information so that the model person on one device and the learner on the other can be combined and displayed as live video (in this embodiment, by the time-division display method).
- By executing the processing program in the ROM 22 on the CPU, the information processing apparatus 20 functions as a procedure learning video creation unit 211, a procedure learning video display processing unit 212, a follow-up learning video display processing unit 213, a display form setting unit 214, a motion information acquisition unit 215, an evaluation unit 216, and a communication processing unit 217 employed as necessary.
- A model operation of ligation suturing by a model person is performed in advance on the practice table device 10, and the state of this model operation is acquired by the endoscope camera 115 and the microphone 1151 and stored in the follow-up learning teaching material storage unit 231 of the RAM 23.
- Alternatively, a teaching material video of the ligation suture treatment (including audio if necessary) stored in advance on an external recording medium may be loaded into the follow-up learning teaching material storage unit 231 via a teaching material capturing unit 30 or the communication unit 25.
- At the same time, information on the position and orientation of the distal action portion of the forceps 114 in the time direction is sequentially acquired from the motion detection unit 116 and stored in the same manner as the teaching material video.
- the procedural learning video creation unit 211 creates a procedural learning video (teaching material video) from an image of a ligation suture treatment by a model person.
- The procedure learning video creation unit 211 reproduces the video of the ligation suture treatment by the model person, and when a division condition occurs during reproduction, such as a scene in which the movement of the forceps 114 substantially stops or the voice guidance is interrupted, it determines that one motion element has ended, cuts out the video portion up to that point, and assigns it a procedure number, for example as a serial number.
- the divided teaching material videos having the motion elements corresponding to the divided time widths are sequentially written in the procedure learning material storage unit 232 corresponding to the procedure numbers.
- Whether or not the movement of the forceps 114 in the video being reproduced has almost stopped may be determined by picking out the image of the forceps 114 in the video and judging its movement, or the motion information acquisition unit 215 described later may judge the movement of the distal action portion of the forceps 114 based on the three-dimensional information from the motion detection unit 116.
- The procedure learning teaching material storage unit 232 may store information (frame information) indicating the cut-out position of each segmented teaching material video instead of the segmented teaching material videos themselves.
- A series of movements of the forceps 114 in the ligation suture treatment (which may include movements of the needle and suture thread) comprises a plurality of motion elements in the time direction, each corresponding to a procedure. For example: (1) holding the needle with the forceps (procedure number 1); (2) inserting the needle (procedure number 2); (3) pulling the suture thread to form a C shape (procedure number 3); (4) winding the thread twice (procedure number 4); and (5) ligating (procedure number 5).
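The division condition described above (a segment boundary where the forceps movement substantially stops) can be sketched as follows. This is a hypothetical illustration: the speed threshold, minimum pause duration, and function name are assumed, not taken from the patent.

```python
# Illustrative sketch of the division condition used by the procedure learning
# video creation unit 211: a segment boundary is assumed wherever the forceps
# tip speed stays below a threshold for at least a minimum pause duration.
def segment_boundaries(speeds, dt, speed_thresh, min_pause):
    """Return frame indices where a pause (speed < speed_thresh for
    >= min_pause seconds) ends, i.e. where the next motion element begins."""
    boundaries = []
    run = 0.0  # accumulated duration of the current low-speed run
    for i, v in enumerate(speeds):
        if v < speed_thresh:
            run += dt
        else:
            if run >= min_pause:
                boundaries.append(i)  # motion resumes: previous segment ended
            run = 0.0
    return boundaries

# 10 fast frames, a 5-frame pause, then fast again, sampled at 30 fps:
speeds = [5.0] * 10 + [0.1] * 5 + [5.0] * 10
print(segment_boundaries(speeds, dt=1 / 30, speed_thresh=0.5, min_pause=0.1))  # → [15]
```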
- FIG. 3 is a reference diagram showing an example of a frame image in procedure learning.
- A is a frame image of picking up the needle, corresponding to procedure number 1.
- B is a frame image of inserting the needle, corresponding to procedure number 2.
- C is a frame image showing the state in which the suture thread is wound, corresponding to procedure number 3.
- An image captured by the endoscope camera 115 is displayed on the monitor screen 13a: a dummy affected part image 1171 fills the screen, an incision image 1181 appears substantially at the center, and a forceps image 1141, a needle image 1191 held between the distal ends of the forceps 114, and a suture thread image 1192 connected to one end of the needle image 1191 are displayed.
- Upon receiving a procedure learning instruction, the procedure learning video display processing unit 212 shifts to the procedure learning video display mode, reads the segmented teaching material videos from the procedure learning teaching material storage unit 232 according to their assigned procedure numbers, and displays them on the monitor 13. The learner thereby understands the learning points of the motion element in each procedure.
- the procedure learning video display processing unit 212 switches the monitor 13 to the endoscope camera 115 side every time reproduction of one segmented teaching material video is completed. Thereby, the learner can imitate the movement element in the segmented teaching material image observed immediately before.
- When the reproduction of one segmented teaching material video is completed, the procedure learning video display processing unit 212 displays the last frame image of the segmented teaching material video just reproduced as a still image together with the captured video (live video) of the endoscope camera 115.
- This combined display preferably employs a visual field synthesis method, which makes it easier for the learner to grasp the final tracking position of the forceps 114 within the segment.
- FIG. 4 is a diagram showing an example of the combined display.
- the solid line portion shows the practice video being captured (live) by the endoscope camera 115
- The broken line portion, shown blinking, is an example of the final still image of the same segmented teaching material video.
- The practice video shows a situation partway through procedure number (1), while the broken line portion is the still image at the end of procedure number (1) (here, the last frame). The learner therefore tries to move the image 1141 of the forceps 114, seen in his or her own practice video, so as to match the still image 1141′, which is the final tracking position.
- In the combined display, only the model's forceps image 1141′ is displayed.
- Picking out the forceps image 1141′ can be performed, for example, by arranging the illumination when shooting the model video so that light falls on the forceps 114, and then applying a luminance threshold to the final frame image.
- The final frame image is displayed blinking at a predetermined frequency, for example from a fraction of a Hz to several Hz (1 Hz in this example), so that it is easy to distinguish from the captured image of the endoscope camera 115.
- the blinking display can be realized by, for example, displaying the final frame image on the monitor 13 at predetermined time intervals.
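The timing of such a blinking overlay can be sketched in a few lines. This is a hypothetical illustration: the 1 Hz rate matches the example above, but the 50% duty cycle and the function name are assumptions.

```python
# Sketch of the 1 Hz blinking overlay: given the elapsed time, decide whether
# the final still frame should be drawn over the live endoscope video on this
# display refresh.
def overlay_visible(t, blink_hz=1.0, duty=0.5):
    """True while the still frame is shown within each blink period."""
    period = 1.0 / blink_hz
    return (t % period) < duty * period

# At 1 Hz with a 50% duty cycle, the still frame is visible during the first
# half of every second:
print(overlay_visible(0.2), overlay_visible(0.7))  # → True False
```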
- The procedure learning video display processing unit 212 switches the display from the endoscope camera 115 to the next segmented teaching material video in different manners according to the evaluation by the motion information acquisition unit 215 and the evaluation unit 216 described later. If the evaluation by the evaluation unit 216 is high, the procedure learning video display processing unit 212 automatically switches the display on the monitor 13 to the next segmented teaching material video. This is because, in the visual field synthesis method, when the learner brings the image 1141 of the forceps 114 operated by himself or herself to the position overlapping the model forceps image 1141′, the movement of the forceps image 1141 appears to continue from the practice video into the next segmented teaching material video.
- When the evaluation is not high, the procedure learning video display processing unit 212 does not perform automatic switching, but instead switches to the next segmented teaching material video upon receiving the signal from the operation unit 121, which is a foot pedal.
- In this case, each time the operation unit 121 is operated, the monitor 13 is switched and the next segmented teaching material video is read from the procedure learning teaching material storage unit 232 and displayed on the monitor 13.
- When the last procedure number is reached, repeat processing is performed to return to the first procedure number.
- An operation on the operation unit 121 performed in a different manner, for example pressing twice or a press distinguished by the length of the pressing time, can also be handled as an instruction to replay the same segmented teaching material video or to end the procedure learning video display mode. Alternatively, a separate operation member may be provided.
- Upon receiving a follow-up learning instruction, the follow-up learning video display processing unit 213 shifts to the follow-up learning video display mode, and combines and displays on the monitor 13 the series of teaching material videos in the follow-up learning teaching material storage unit 231 and the practice video of the endoscope camera 115, here alternately in time division.
- FIG. 5 schematically shows a part of this display mode.
- In FIG. 5, (A) shows the frames of the series of teaching material videos in the follow-up learning teaching material storage unit 231, and (B) shows the frames of the practice video from the endoscope camera 115. On the monitor 13, for example, the video is refreshed at a frame period of 16.7 ms (60 Hz).
- FIG. 5 shows that the temporally continuous video (A) and the temporally continuous video (B) are displayed alternately, for display times corresponding to A frames and B frames respectively. Accordingly, one display cycle corresponds to (A + B) frames, and the self-other ratio (learner : model) is B : A.
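The frame scheduling of this time-division display can be sketched as follows. The frame counts A = 5 and B = 10 are assumed for illustration: at 60 Hz they give a 15-frame cycle, i.e. a 4 Hz switching period, within the 2 to 4 Hz range discussed below.

```python
# Sketch (frame counts assumed) of the time-division composition of FIG. 5:
# each display refresh shows either a teaching material frame (A) or a live
# practice frame (B), according to the position within the (A + B)-frame cycle.
def source_for_frame(n, a_frames, b_frames):
    """Return 'model' during the first a_frames of each cycle, else 'self'."""
    return "model" if n % (a_frames + b_frames) < a_frames else "self"

# A = 5 model frames and B = 10 learner frames per cycle:
seq = [source_for_frame(n, 5, 10) for n in range(15)]
print(seq.count("model"), seq.count("self"))  # → 5 10
```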
- the display form setting unit 214 can change and set at least one of the display cycle and the self-other ratio in the follow-up learning video display mode via the operation unit 24.
- The follow-up learning video display processing unit 213 adopts a time-division display method as the video synthesis method in this embodiment, but a superimposition method in which both videos are overlaid, or a parallel display method in which they are displayed side by side, is also conceivable depending on the application.
- In the superimposition method, however, the follow-up performance deteriorates when following forceps that overlap in a complicated manner with small displacements, and in the parallel display method, in which correspondence must be taken between a large number of feature points, the complicated switching of attention between the learner's video and the model's video is considered to deteriorate tracking. For these reasons, the time-division method, in which the videos are alternately switched and presented, is preferable.
- an illusion phenomenon in which the learner maintains the same feeling as the model and also causes a sense of fusion that is, in the time-division method, when performing coordinated physical exercise, the voluntary nature of self-motion is not lost by alternately switching the video of the same viewpoint of the self-video and the teaching material video on the monitor 13, In addition, there is induction of exercise that naturally aligns the exercise with the opponent.
- the movement parts of the two displayed sequentially in the field of view merge to create an illusion that they are one's own movement part. Is possible.
- the sense of fusion refers to the impression of voluntary and involuntary fusion that moves as expected, even though it is a movement site on its own side. In other words, it brings about a subjective feeling that it is not visible in the motion part of the teaching material video, and cannot be seen in anything other than its own motion part. As a result, it is considered that the self-side achieves the multipoint association and the execution of the tracking under the consciousness that the tracking error can be tracked in a state where the tracking error is not clearly recognized or cannot be recognized.
- the applicant of the present application has found that, when the self-motion image captured from the same viewpoint and the image to be followed are displayed on the monitor by the time-division method, the switching period that produces the above-described effect is approximately 2 Hz to 4 Hz in terms of frequency, with a self-other display ratio of preferably 1:1 to 1:3. Under this condition, the occurrence of the sense of fusion makes the tracking accuracy higher.
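As a rough, non-authoritative sketch of how the 2 Hz to 4 Hz switching frequency and 1:1 to 1:3 self-other ratio could be turned into per-phase frame counts for a given display refresh rate (the function and its parameters are illustrative, not taken from the patent):

```python
def frame_counts(fps, switch_hz, self_ratio, other_ratio):
    """Split one self/other switching cycle into frame counts.

    One full cycle (learner's live video + teaching material video)
    lasts 1/switch_hz seconds; the two phases share it in the
    self_ratio:other_ratio proportion described in the text.
    """
    cycle_frames = fps / switch_hz                 # frames per full cycle
    total = self_ratio + other_ratio
    a = round(cycle_frames * self_ratio / total)   # learner's live video
    b = round(cycle_frames * other_ratio / total)  # teaching material video
    return a, b

# 60 fps display, 3 Hz switching, 1:1 self-other ratio
print(frame_counts(60, 3, 1, 1))  # (10, 10)
```

A 1:3 ratio at 2 Hz on a 30 fps display would give roughly 4 learner frames to 11 material frames per cycle under this rounding.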
- the motion information acquisition unit 215 obtains, from the sensor data of the motion detection unit 116 described above, the motion information of the forceps 114 (parameters such as position, orientation, displacement velocity, and displacement acceleration). The position and orientation are obtained as vectors, and the velocity and acceleration are obtained from sensor data at a plurality of points in time, for example at a predetermined period.
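As an illustration only (the function name and data layout are assumptions, not from the patent), deriving velocity and acceleration from position vectors sampled at a fixed period can be sketched with finite differences:

```python
def derive_motion(positions, dt):
    """Finite-difference velocity and acceleration from sampled positions.

    positions: list of (x, y, z) tuples sampled every dt seconds.
    Returns (velocities, accelerations); each list is one element
    shorter than its input, matching the plural-sample derivation
    described in the text.
    """
    def diff(seq):
        return [tuple((b - a) / dt for a, b in zip(p, q))
                for p, q in zip(seq, seq[1:])]
    velocities = diff(positions)
    accelerations = diff(velocities)
    return velocities, accelerations

pos = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.3, 0.0, 0.0)]
v, a = derive_motion(pos, dt=0.1)
```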
- the evaluation unit 216 evaluates the proficiency of the learner's forceps operation from the difference between the movement information of the forceps 114 in the teaching material video, acquired in advance, and the movement information of the tip action portion of the learner's forceps 114 obtained by the movement information acquisition unit 215. Further, the evaluation unit 216 determines from the evaluation result whether or not to automatically switch the monitor 13 from the practice video to the next segmented teaching material video. The determination result is output to the procedure learning video display processing unit 212 together with an instruction as to whether or not automatic switching is to be performed.
- a variety of methods can be used to determine whether or not automatic switching is appropriate. For example, depending on the application and the required accuracy, all of the motion-information parameters described above may be adopted, or only the position information, the position and orientation information, or the position, orientation, and displacement velocity information may be used.
- evaluation is performed in two stages. More specifically, in the first stage (the check stage), a change state such as the difference between each parameter of the movement information on the learner side and on the model side is monitored, and it is determined whether the learner's forceps operation is approaching the movement at the end of the current segmented teaching material video. In the second stage (the evaluation stage), after the determination in the check stage is affirmed, the difference between each parameter of both sets of motion information is calculated together with, for example, a norm function for variance evaluation using each parameter. By these means, the switch to the next segmented teaching material video is timed so that the learner's movement connects naturally to the model's movement (i.e., both movements agree in position, orientation, displacement velocity, and displacement acceleration).
- the displacement acceleration is adopted to evaluate more accurately whether the forceps 114 operated by the learner in the current segment create the illusion of having started the next segment, after the image is switched, with the same movement as the model's forceps 114 at the start of that segment. However, the evaluation may instead be made with the displacement velocity alone, without using the displacement acceleration.
- FIG. 6 is a flowchart showing an embodiment of the procedure learning material creation process.
- reproduction of the original teaching material performed by the model is started (step S1).
- during reproduction, it is determined whether at least one of the division conditions, a motion speed of 0 (including substantially zero) in the motion detection information or an interruption of the sound, is satisfied (step S3). If a division condition is met, the video from the beginning to the current time (or, for the second and subsequent divisions, from the previous division location to the current time) is cut out, sequentially assigned a procedure number, and stored in the procedure learning material storage unit 232 (step S5).
- the video may actually be cut out and stored individually, but the division-point information (storage address) may instead be recorded in association with the procedure number, and the same division processing may be performed by address management.
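A minimal sketch of this segmentation rule (function and field names are illustrative, not from the patent): each frame's motion speed and audio level is scanned, and a division point is recorded, by address rather than by physically cutting the video, whenever the speed is substantially zero or the sound is interrupted, as in step S3:

```python
def segment_points(samples, speed_eps=1e-3, audio_eps=1e-3):
    """Return frame indices where a division condition is met.

    samples: list of (speed, audio_level) per frame.  A point divides
    the video when the motion speed is substantially zero OR the
    audio is interrupted, mirroring step S3 of the flowchart.
    """
    points = []
    for i, (speed, audio) in enumerate(samples):
        if speed <= speed_eps or audio <= audio_eps:
            # record the address (index) instead of cutting the video,
            # as in the address-management variant described above
            points.append(i)
    return points

frames = [(1.0, 0.8), (0.0, 0.7), (0.9, 0.0), (0.5, 0.6)]
print(segment_points(frames))  # [1, 2]
```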
- in step S7, it is determined whether or not the reproduction is finished. If it is finished, this flow ends.
- if no division condition is met in step S3, the process proceeds to step S7 and, on the condition that the reproduction is not finished, returns to step S3 to repeat the same cutout processing.
- FIG. 7 is a flowchart showing an embodiment of the procedural learning material display mode process.
- the segmented teaching material video corresponding to the procedure number i is read from the procedure learning material storage unit 232 and displayed (reproduced) on the monitor 13 (step S13).
- in step S17, when the reading of the segmented teaching material video is completed, the monitor 13 is switched to the live video of the endoscope camera 115, and the final frame image of the segmented teaching material video of procedure number i is displayed in a blinking state (step S17).
- in step S19, it is determined whether the procedure number i is the final one (procedure number I) (step S19). If it is final, the process proceeds to step S31. If it is not final, measurement and evaluation calculation processing is executed (step S21). The measurement and evaluation calculation processing will be described later.
- in step S23, it is determined whether or not an automatic switching instruction has been output. If it has, the procedure number i is incremented by 1 (step S25), the video display of the endoscope camera 115 is stopped, and the segmented teaching material video of the incremented procedure number i is reproduced on the monitor 13 (step S27).
- in step S29, it is determined whether or not there has been an operation on the operation unit 121 (step S29). If an operation instructing advancement is made, the procedure number i is incremented by 1 (step S25), the video display of the endoscope camera 115 is stopped, and the segmented teaching material video of the incremented procedure number i is reproduced on the monitor 13 (step S27).
- if in step S29 it is determined that there is a replay instruction operation, the segmented teaching material video of the same procedure number i is reproduced again on the monitor 13 (step S33).
- in step S19, if the procedure number i is final, it is determined whether or not a replay instruction has been operated (step S31). If there is a replay instruction, the process proceeds to step S33; if not, this is regarded as the end of learning and this flow is finished. Although not shown in the figure, if an instruction operation such as forced termination can be accepted, the practice can be terminated midway.
- FIG. 8 is a flowchart showing an embodiment of the measurement and evaluation calculation process in step S21.
- the check calculation in the first stage is executed (step S41), and when the determination in the check calculation is affirmed, the prediction calculation in the second stage is executed (step S43).
- FIG. 9 is a flowchart showing an example of the check calculation process in the first stage.
- sensor data from the motion detection unit 116 is acquired for the forceps 114 operated by the learner (step S51).
- each parameter of motion information is calculated using the acquired sensor data (step S53).
- a change state such as the difference between each parameter of the calculated motion information and the model-side motion information is monitored (step S55), and it is determined whether the learner's forceps operation is approaching the movement corresponding to the final stage of the current teaching material video (step S57). When it is determined that the movement is approaching, the process returns.
- if it is not determined that the movement corresponding to the final-stage movement is approaching, the process returns to step S51 and the same check processing is repeated. Note that, when an operation on the operation unit 121 occurs as an interrupt during the iterative processing from step S57 back to step S51, the process is set to proceed to step S29.
- FIG. 10 is a flowchart showing an example of the prediction calculation process in the second stage.
- sensor data from the motion detection unit 116 is acquired for the forceps 114 operated by the learner (step S61).
- motion information parameters are calculated using the acquired sensor data, and a motion prediction calculation is further performed to calculate a norm (step S63).
- the calculated parameters are compared and the norms are compared (step S65). For example, on the condition that the difference between the two sets of parameters is within a certain range, it is determined whether the difference between both norms is within a threshold range (step S67). If it is within the allowable range, an automatic switching instruction is output (step S69).
- step S71 determines whether or not the difference is in an unacceptable range. If it is not, the process returns to step S61 and the same prediction processing is repeated. If it is in an unacceptable range, an instruction disabling automatic switching is output (step S73). Note that steps S71 and S73 may be omitted, and the process may be set to proceed to step S29 when an operation on the operation unit 121 occurs as an interrupt during the repetitive processing returning from step S67 to step S61.
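A minimal sketch of the two-stage comparison described above, under assumed thresholds (all names, the Euclidean norm choice, and the step-number mapping are illustrative, not prescribed by the patent):

```python
def check_stage(learner, model, tol):
    """First stage (cf. steps S51-S57): is every motion parameter of
    the learner within tol of the model's final-stage value?"""
    return all(abs(l - m) <= tol for l, m in zip(learner, model))

def predict_stage(learner, model, param_tol, norm_tol):
    """Second stage (cf. steps S61-S69): compare parameters and their
    Euclidean norms; True means an automatic switching instruction
    may be issued."""
    if not check_stage(learner, model, param_tol):
        return False
    norm_diff = abs(sum(l * l for l in learner) ** 0.5
                    - sum(m * m for m in model) ** 0.5)
    return norm_diff <= norm_tol

learner = [1.0, 0.5, 0.2]     # e.g. position, velocity, acceleration terms
model = [1.05, 0.45, 0.25]
print(predict_stage(learner, model, param_tol=0.1, norm_tol=0.05))  # True
```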
- FIG. 11 is a flowchart showing an embodiment of the follow-up learning material display mode process.
- a time division switching display process is executed (step S101).
- this time-division switching display process can be executed, and ended, as many times as desired by accepting an instruction operation on the operation unit 121 according to the learner's intention (step S103).
- FIG. 12 is a flowchart showing an embodiment of the time-division switching display process in the follow-up learning material display mode process.
- the teaching material video is read from the follow-up learning material storage unit 231, output to the monitor 13, and the number of frames is counted (step S111).
- when display has been performed for A frames (Yes in step S113), the reading of the teaching material video is interrupted at that point (step S115).
- the monitor 13 is switched to the endoscope camera 115 side, the live video of the endoscope camera 115 is displayed on the monitor 13, and the frame number counting process is performed (step S117).
- the monitor 13 is then switched back to the teaching material video side, the teaching material video is output to the monitor 13 from the address location in the follow-up learning material storage unit 231 at which it was interrupted immediately before, and the number of frames is counted (step S121).
- if the reading of the teaching material video has not been completed, the process returns to step S113, and the processing that alternately displays A frames of the teaching material video and B frames of the live video of the endoscope camera on the monitor 13 is similarly repeated. If the reading of the teaching material video has been completed, this flow ends.
- alternatively, instead of interrupting the reading after A frames (Yes in step S113), frame images may be continuously read from the follow-up learning material storage unit 231 even while the live video of the endoscope camera 115 is displayed, simply without being routed to the monitor 13. In this way, the progression of the frame images from the follow-up learning material storage unit 231 in the time direction corresponds to real time.
- the present invention may adopt the following aspects.
- the present invention is not limited to this, and is also applicable to specialized medical treatments performed with an endoscope camera.
- teleoperation using a surgical robot while remotely observing an image of the affected area via a monitor is likewise specialized, and since the number of doctors who can perform it is small, the invention can also be applied to various medical procedures in such remote surgery.
- the application range of the present invention is not limited to the medical field, and can be applied to other fields.
- for example, a field is envisaged in which a product (work) is manufactured through various operation elements, and a series of operations passing through these operation elements completes the product.
- the present invention can also be applied to teaching materials for cooking training in which a specific cooking is performed on each ingredient and finally a target dish is completed. Furthermore, it can be applied to various types of training in sports.
- the monitor 13 is arranged in front of the learner.
- a video see-through head-mounted display (VST-HMD) may also be adopted as the device that realizes visual guidance of body movement.
- further, by performing delay processing that sets a predetermined time lag in the information processing apparatus 20, a sense of force can be given to the learner, and more realistic learning can be expected. In addition, by interposing a digital filter that performs differentiation processing on the video, the edges of the video can be enhanced and video recognition is improved.
- switching from procedural learning to follow-up learning is performed by an instruction to the operation unit 121. Instead, however, the transition to follow-up learning may be made automatically after a certain time has elapsed from the end of procedural learning. Further, as a condition for the transition to follow-up learning, it may be required that all of the segmented teaching material videos in procedural learning were completed with an automatic switching instruction.
- the movement of the learner's fingers or of the forceps may also be measured by, for example, attaching a pressure sensor to the finger insertion unit at the proximal end to measure the amount of force applied by the fingers during movement.
- the practice video and the still image of the last frame of the segmented teaching material video reproduced immediately before are displayed on the monitor 13 together.
- the present invention is not limited to this.
- an image, in this example the tip of the forceps image 1141, may be displayed as a point-like marker only at the position (and, as required, orientation) that the tip should finally reach in the segment, or guidance may be given by audio alone.
- the display of the final frame video may be started from the time when the monitor 13 is switched to the endoscope camera 115 side, or may be started in association with a predetermined condition, for example the progress of the practice.
- the simultaneous display timing may be set in a predetermined range wider than the determination in the first stage in which the check calculation is performed (step S57).
- the video to be displayed together may be a moving image as well as a still image.
- a mode in which a video of about several seconds at the end of the divided teaching material video is repeatedly played back may be used.
- the measurement and evaluation processes in the evaluation unit 216 are performed in the first and second stages, but only one of the two stages may be performed. Moreover, although the parameter comparison process and the norm comparison process have been described, these evaluations amount to the following: in a state where the position error vector ΔP of the measured part between the teaching material video and the actual video is aligned in the same direction as the velocity vector v of the motion, it is desirable to switch at a timing at which v ≈ ΔP/T. When this condition deviates greatly, the learning promotion effect by the illusion of motor connection is greatly impaired.
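A hedged sketch of this switching criterion (the tolerances and names are assumptions, and T is interpreted here as the time remaining until the switch, which the patent text does not state explicitly):

```python
import math

def ok_to_switch(delta_p, v, t, angle_tol=0.2, mag_tol=0.2):
    """Check the v ≈ ΔP/T switching condition.

    delta_p: position error vector between teaching video and live video
    v:       current velocity vector of the measured part
    t:       assumed time until the switch (interpretation of T)
    True when v and ΔP/T point the same way and have similar magnitude.
    """
    target = [d / t for d in delta_p]                 # ΔP / T
    dot = sum(a * b for a, b in zip(v, target))
    nv = math.sqrt(sum(a * a for a in v))
    nt = math.sqrt(sum(b * b for b in target))
    if nv == 0 or nt == 0:
        return False
    aligned = dot / (nv * nt) >= math.cos(angle_tol)  # same direction
    similar = abs(nv - nt) / nt <= mag_tol            # similar magnitude
    return aligned and similar

print(ok_to_switch(delta_p=[1.0, 0.0], v=[2.0, 0.1], t=0.5))  # True
```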
- the illusion effect can be further enhanced by applying the switching cycle and phase pattern for generating a sense of fusion shown in the prior application (Japanese Patent Application No. 2012-97328) to the timing of switching to the segmented teaching material video.
- the position error vector ⁇ P is small to some extent at the time of connection, that is, when the position approaches, the blinking cycle, that is, the switching cycle may be switched to the fusion pattern.
- the fusion pattern is preferably a frequency of 2 Hz to 4 Hz and a self-other ratio of 1: 1 to 1: 3.
- in summary, the motion learning support apparatus alternately displays, for each segment, each segmented teaching material video of a motion-related teaching material video segmented in the time direction and a learner's practice image, captured by the imaging unit, imitating the motion of each segmented teaching material video; it includes motion detection means for detecting the learner's movement and evaluation means for evaluating the learner's movement against the motion of the segmented teaching material video.
- with this configuration, the video is switched at an appropriate time so that the learner's movement in the practice video connects to the next segmented teaching material video. The learner can therefore have the illusion of having started the movement of the next segment with the same movement as the model, can remember the movement as a clearer and more detailed action image, and the learning effect can be further improved.
- in the above configuration, it is preferable that the first learning support processing means also displays the final-stage video of the segmented teaching material video reproduced immediately before, while the image of the imaging unit is displayed on the monitor. According to this configuration, the final-stage teaching material video is displayed, so that the tracking position can be easily recognized.
- preferably, the first learning support processing means displays the image of the imaging unit and the final-stage video of the immediately preceding segmented teaching material video by a visual-field synthesis method.
- according to this configuration, since the tracking position is displayed in accordance with the learner's viewpoint, the illusion described above is induced when the position and timing match, the movement is stored as a clearer and more detailed action image, and the learning effect can be further improved.
- preferably, the first learning support processing means displays the final-stage video of the immediately preceding segmented teaching material video in a display form different from the image of the imaging unit. According to this configuration, the learner can easily distinguish his/her own live video from the segmented teaching material video.
- preferably, the first learning support processing means displays the final-stage video of the immediately preceding segmented teaching material video in a blinking display. This likewise makes it easy for the learner to distinguish his/her own live video from the segmented teaching material video.
- preferably, the first learning support processing means displays the final-stage video of the immediately preceding segmented teaching material video from a predetermined time during display of the practice image. According to this configuration, since the end position of the segment is indicated from early in the practice, the operation is more stable and the learning effect is higher than when the display is started midway.
- preferably, the final-stage video of the immediately preceding segmented teaching material video is a still image. According to this configuration, since the segmented teaching material video is stationary, the learner can easily distinguish it from his/her own live video.
- preferably, the apparatus further includes second learning support processing means for displaying on the monitor, in a time-division manner, the teaching material video before segmentation and the captured video of the imaging unit. According to this configuration, motor inducibility is exhibited and learning efficiency can be improved.
- the motion learning support method alternately displays, for each segment, each segmented teaching material video of a motion-related teaching material video segmented in the time direction and a learner's practice image, captured by the imaging unit, imitating the motion of each segmented teaching material video; the learner's movement with respect to the movement of the segmented teaching material video is evaluated by detecting the learner's movement, and when the movements are evaluated as similar, it is preferable to switch the monitor from the image of the imaging unit to the next segmented teaching material video.
- with this method, the video is switched at an appropriate time so that the learner's movement in the practice video connects to the next segmented teaching material video. The learner can therefore have the illusion of having started the movement of the next segment with the same movement as the model, can remember the movement as a clearer and more detailed action image, and the learning effect can be further improved.
- in the motion learning support method, it is preferable to display the final-stage video of the segmented teaching material video reproduced immediately before, while displaying the image of the imaging unit on the monitor.
- according to this configuration, the final-stage teaching material video is displayed, so that the tracking position can be easily recognized.
Description
10 practice bench apparatus
114 forceps
115 endoscope camera (imaging unit)
116 motion detection unit (motion detection means)
121 operation unit
13 monitor
20 information processing apparatus
21 control unit
211 procedure learning video creation unit (motion element video creation means)
212 procedure learning video display processing unit (first learning support processing means)
213 follow-up learning video display processing unit (second learning support processing means)
215 motion information acquisition unit (motion detection means)
216 evaluation unit (evaluation means)
232 procedure learning material storage unit
Claims (10)
- In a motion learning support apparatus that alternately displays on a monitor, for each segment, each segmented teaching material video of a motion-related teaching material video segmented in the time direction and a practice image, captured by an imaging unit, of a learner imitating the motion of each segmented teaching material video, the apparatus comprising:
motion detection means for detecting the learner's movement;
evaluation means for evaluating the similarity of the learner's movement to the movement of the segmented teaching material video; and
first learning support processing means for switching the monitor from the image of the imaging unit to the next segmented teaching material video when the evaluation means evaluates the movements as similar. - The motion learning support apparatus according to claim 1, wherein the first learning support processing means also displays, while the image of the imaging unit is displayed on the monitor, the final-stage video of the segmented teaching material video reproduced immediately before.
- The motion learning support apparatus according to claim 2, wherein the first learning support processing means displays the image of the imaging unit and the final-stage video of the immediately preceding segmented teaching material video by a visual-field synthesis method.
- The motion learning support apparatus according to claim 2 or 3, wherein the first learning support processing means displays the final-stage video of the immediately preceding segmented teaching material video in a display form different from the image of the imaging unit.
- The motion learning support apparatus according to claim 4, wherein the first learning support processing means displays the final-stage video of the immediately preceding segmented teaching material video in a blinking display.
- The motion learning support apparatus according to any one of claims 2 to 5, wherein the first learning support processing means displays the final-stage video of the immediately preceding segmented teaching material video from a predetermined time during display of the practice image.
- The motion learning support apparatus according to any one of claims 2 to 6, wherein the final-stage video of the immediately preceding segmented teaching material video is a still image.
- The motion learning support apparatus according to any one of claims 1 to 7, further comprising second learning support processing means for displaying on the monitor, in time-division synthesis, the teaching material video before segmentation and the video captured by the imaging unit.
- In a motion learning support method that alternately displays on a monitor, for each segment, each segmented teaching material video of a motion-related teaching material video segmented in the time direction and a practice image, captured by an imaging unit, of a learner imitating the motion of each segmented teaching material video,
the similarity of the learner's movement to the movement of the segmented teaching material video is evaluated by detecting the learner's movement, and
when the movements are evaluated as similar, the monitor is switched from the image of the imaging unit to the next segmented teaching material video. - The motion learning support method according to claim 9, wherein, while the image of the imaging unit is displayed on the monitor, the final-stage video of the segmented teaching material video reproduced immediately before is also displayed.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/104,385 US10360814B2 (en) | 2013-12-26 | 2013-12-26 | Motion learning support apparatus |
JP2015554412A JP6162259B2 (ja) | 2013-12-26 | 2013-12-26 | 動き学習支援装置及び動き学習支援方法 |
EP13900488.1A EP3089139A4 (en) | 2013-12-26 | 2013-12-26 | Movement learning support device and movement learning support method |
PCT/JP2013/084957 WO2015097825A1 (ja) | 2013-12-26 | 2013-12-26 | 動き学習支援装置及び動き学習支援方法 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2013/084957 WO2015097825A1 (ja) | 2013-12-26 | 2013-12-26 | 動き学習支援装置及び動き学習支援方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015097825A1 true WO2015097825A1 (ja) | 2015-07-02 |
Family
ID=53477761
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/084957 WO2015097825A1 (ja) | 2013-12-26 | 2013-12-26 | 動き学習支援装置及び動き学習支援方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US10360814B2 (ja) |
EP (1) | EP3089139A4 (ja) |
JP (1) | JP6162259B2 (ja) |
WO (1) | WO2015097825A1 (ja) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018019993A (ja) * | 2016-08-05 | 2018-02-08 | 国立大学法人千葉大学 | 画像作成装置、画像作成システム、画像作成方法およびダミー器具 |
WO2019008737A1 (ja) * | 2017-07-07 | 2019-01-10 | オリンパス株式会社 | 内視鏡用トレーニングシステム |
JP2019016952A (ja) * | 2017-07-07 | 2019-01-31 | 富士ゼロックス株式会社 | 情報処理装置及びプログラム |
JP2020506436A (ja) * | 2017-02-14 | 2020-02-27 | アプライド メディカル リソーシーズ コーポレイション | 腹腔鏡訓練システム |
JP2020134710A (ja) * | 2019-02-20 | 2020-08-31 | 国立大学法人大阪大学 | 手術トレーニング装置 |
JP2020139998A (ja) * | 2019-02-27 | 2020-09-03 | 公立大学法人埼玉県立大学 | 手指操作支援装置及び支援方法 |
JP2020144233A (ja) * | 2019-03-06 | 2020-09-10 | 株式会社日立製作所 | 学習支援システム、学習支援装置及びプログラム |
JP2021157216A (ja) * | 2020-03-25 | 2021-10-07 | 富士工業株式会社 | 料理教室システムおよび方法 |
JP7082384B1 (ja) | 2021-05-28 | 2022-06-08 | 浩平 田仲 | 学習システム及び学習方法 |
WO2022202860A1 (ja) * | 2021-03-23 | 2022-09-29 | 国立大学法人 東京大学 | 情報処理システム、情報処理方法及びプログラム |
JP7486860B1 (ja) | 2023-08-04 | 2024-05-20 | 株式会社計数技研 | 映像合成装置、映像合成方法、及びプログラム |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6359887B2 (ja) * | 2014-06-19 | 2018-07-18 | 株式会社京都科学 | 縫合手技評価装置、縫合手記評価装置用プログラム、及び、縫合シミュレータシステム |
JP7228829B2 (ja) * | 2018-09-20 | 2023-02-27 | 学校法人立命館 | 聴診トレーニングシステムおよび聴診トレーニングプログラム |
WO2023180182A1 (en) * | 2022-03-22 | 2023-09-28 | Koninklijke Philips N.V. | Systems and methods for exhaustion detection using networked tools |
USD1036498S1 (en) * | 2022-07-25 | 2024-07-23 | Digital Surgery Limited | Portion of a display screen with a graphical user interface |
USD1036499S1 (en) * | 2022-07-25 | 2024-07-23 | Digital Surgery Limited | Portion of a display screen with a graphical user interface |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08182786A (ja) | 1994-12-27 | 1996-07-16 | Shinkiyou Denshi Kk | 運動体の画像解析装置 |
JPH10274918A (ja) * | 1997-03-31 | 1998-10-13 | Uinetsuto:Kk | 身体動作練習システムおよび身体動作練習用教材 |
JPH10309335A (ja) * | 1997-05-12 | 1998-11-24 | Tomotaka Marui | 運動動作の画像記録・再生システム |
JP2000504854A (ja) | 1996-02-13 | 2000-04-18 | マサチューセッツ・インスティテュート・オブ・テクノロジー | 仮想環境における人間の動作軌道学習装置 |
JP2007323268A (ja) * | 2006-05-31 | 2007-12-13 | Oki Electric Ind Co Ltd | 映像提供装置 |
JP2012097328A (ja) | 2010-11-02 | 2012-05-24 | Fuji Electric Co Ltd | 薄膜製造方法および装置 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5984684A (en) * | 1996-12-02 | 1999-11-16 | Brostedt; Per-Arne | Method and system for teaching physical skills |
US5904484A (en) * | 1996-12-23 | 1999-05-18 | Burns; Dave | Interactive motion training device and method |
US6514081B1 (en) * | 1999-08-06 | 2003-02-04 | Jeffrey L. Mengoli | Method and apparatus for automating motion analysis |
US20030054327A1 (en) * | 2001-09-20 | 2003-03-20 | Evensen Mark H. | Repetitive motion feedback system and method of practicing a repetitive motion |
US8113844B2 (en) * | 2006-12-15 | 2012-02-14 | Atellis, Inc. | Method, system, and computer-readable recording medium for synchronous multi-media recording and playback with end user control of time, data, and event visualization for playback control over a network |
EP2628143A4 (en) * | 2010-10-11 | 2015-04-22 | Teachscape Inc | METHOD AND SYSTEMS FOR RECORDING, PROCESSING, MANAGING AND / OR EVALUATING MULTIMEDIA CONTENT OF OBSERVED PERSONS IN EXECUTING A TASK |
EP2636034A4 (en) | 2010-11-04 | 2015-07-22 | Univ Johns Hopkins | SYSTEM AND METHOD FOR ASSESSING OR ENHANCING CAPACITIES IN NON-INVASIVE SURGERY |
WO2012106706A2 (en) * | 2011-02-04 | 2012-08-09 | University Of Pittsburgh - Of The Commonwealth System Of Higher Education | Hybrid physical-virtual reality simulation for clinical training capable of providing feedback to a physical anatomic model |
TW201314639A (zh) * | 2011-09-21 | 2013-04-01 | Ind Tech Res Inst | 運動學習系統與輔助使用者學習運動之方法 |
US20140308640A1 (en) * | 2013-07-08 | 2014-10-16 | George Edward Forman | Method to Improve Skilled Motion Using Concurrent Video of Master and Student Performance |
2013
- 2013-12-26 US US15/104,385 patent/US10360814B2/en active Active
- 2013-12-26 WO PCT/JP2013/084957 patent/WO2015097825A1/ja active Application Filing
- 2013-12-26 EP EP13900488.1A patent/EP3089139A4/en not_active Ceased
- 2013-12-26 JP JP2015554412A patent/JP6162259B2/ja active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08182786A (ja) | 1994-12-27 | 1996-07-16 | Shinkiyou Denshi Kk | 運動体の画像解析装置 |
JP2000504854A (ja) | 1996-02-13 | 2000-04-18 | マサチューセッツ・インスティテュート・オブ・テクノロジー | 仮想環境における人間の動作軌道学習装置 |
JPH10274918A (ja) * | 1997-03-31 | 1998-10-13 | Uinetsuto:Kk | 身体動作練習システムおよび身体動作練習用教材 |
JPH10309335A (ja) * | 1997-05-12 | 1998-11-24 | Tomotaka Marui | 運動動作の画像記録・再生システム |
JP2007323268A (ja) * | 2006-05-31 | 2007-12-13 | Oki Electric Ind Co Ltd | 映像提供装置 |
JP2012097328A (ja) | 2010-11-02 | 2012-05-24 | Fuji Electric Co Ltd | 薄膜製造方法および装置 |
Non-Patent Citations (6)
Title |
---|
DAISUKE KONDO; HIROYUKI IIZUKA; HIDEYUKI ANDO; KAZUTAKA OBAMA; YOSHIHARU SAKAI; TARO MAEDA: "Learning support using self-other motion overlapping for laparoscopy training", NO.13-2 PROCEEDINGS OF THE 2013 JSME CONFERENCE ON ROBOTICS AND MECHATRONICS, 22 May 2013 (2013-05-22) |
DAN MIKAMI ET AL.: "A Video Feedback System Providing Motions Synchronized with Reference Examples for Motor Learning", KENKYU HOKOKU CONSUMER.DEVICE & SYSTEM (CDS, vol. 8, no. 2, 5 September 2013 (2013-09-05), XP055292704, Retrieved from the Internet <URL:https://ipsj.ixsq.nii.ac.jp/ej/?action=repository_action_common_download&itemid=95137&item_no=1&attribute_id=1&fileno=1> [retrieved on 20130320] * |
KENTA MATSUYOSHI ET AL.: "Kaigo . Kango Gakushu ni Okeru Doga Hikaku Kyozai o Mochiita Gakushu Shien System no Kochiku (O", DAI 2 KAI FORUM ON DATA ENGINEERING AND INFORMATION MANAGEMENT- DEIM 2010 -RONBUNSHU, 25 May 2010 (2010-05-25), XP055292705, Retrieved from the Internet <URL:http://db-event.jpn.org/deim2010/proceedings/files/F8-1.pdf> [retrieved on 20130320] * |
KOJI IKUTA; JUNYA FUKUYAMA; AKIRA NAKANISHI; KOJI HOTTA: "Study on portable virtual endoscope system with force sensation-Development of the record and reproduce system of insertion skill and the fixed- quantity rating method of the insertion skill", PROCEEDINGS OF THE 03' JAPAN SOCIETY OF MECHANICAL ENGINEERS CONFERENCE ON ROBOTICS AND MECHATRONICS, vol. 1P1-2F-D, no. 03-4 |
SAORI OTA ET AL.: "DESIGN AND DEVELOPMENT OF A LEARNING SUPPORT ENVIRONMENT FOR APPLE PEELING USING DATA GLOVES", IEICE TECHNICAL REPORT, vol. 111, no. 473, 3 March 2012 (2012-03-03), pages 155 - 160, XP055292701 * |
See also references of EP3089139A4 * |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018019993A (ja) * | 2016-08-05 | 2018-02-08 | 国立大学法人千葉大学 | Image creation device, image creation system, image creation method, and dummy instrument |
JP2020506436A (ja) * | 2017-02-14 | 2020-02-27 | アプライド メディカル リソーシーズ コーポレイション | Laparoscopic training system |
JP7235665B2 (ja) | 2017-02-14 | 2023-03-08 | アプライド メディカル リソーシーズ コーポレイション | Laparoscopic training system |
JP7005970B2 (ja) | 2017-07-07 | 2022-01-24 | 富士フイルムビジネスイノベーション株式会社 | Information processing device and program |
WO2019008737A1 (ja) * | 2017-07-07 | 2019-01-10 | オリンパス株式会社 | Endoscope training system |
JP2019016952A (ja) * | 2017-07-07 | 2019-01-31 | 富士ゼロックス株式会社 | Information processing device and program |
JP2020134710A (ja) * | 2019-02-20 | 2020-08-31 | 国立大学法人大阪大学 | Surgical training device |
JP7201998B2 (ja) | 2019-02-20 | 2023-01-11 | 国立大学法人大阪大学 | Surgical training device |
JP2020139998A (ja) * | 2019-02-27 | 2020-09-03 | 公立大学法人埼玉県立大学 | Finger operation support device and support method |
WO2020179128A1 (ja) * | 2019-03-06 | 2020-09-10 | 株式会社日立製作所 | Learning support system, learning support device, and program |
JP7116696B2 (ja) | 2019-03-06 | 2022-08-10 | 株式会社日立製作所 | Learning support system and program |
JP2020144233A (ja) * | 2019-03-06 | 2020-09-10 | 株式会社日立製作所 | Learning support system, learning support device, and program |
JP7402516B2 (ja) | 2020-03-25 | 2023-12-21 | 富士工業株式会社 | Cooking class system and method |
JP2021157216A (ja) * | 2020-03-25 | 2021-10-07 | 富士工業株式会社 | Cooking class system and method |
WO2022202860A1 (ja) * | 2021-03-23 | 2022-09-29 | 国立大学法人 東京大学 | Information processing system, information processing method, and program |
JP2022147538A (ja) * | 2021-03-23 | 2022-10-06 | 国立大学法人 東京大学 | Information processing system, information processing method, and program |
JP7345866B2 (ja) | 2021-03-23 | 2023-09-19 | 国立大学法人 東京大学 | Information processing system, information processing method, and program |
JP2023164901A (ja) * | 2021-03-23 | 2023-11-14 | 国立大学法人 東京大学 | Information processing system, information processing method, and program |
JP7417337B2 (ja) | 2021-03-23 | 2024-01-18 | 国立大学法人 東京大学 | Information processing system, information processing method, and program |
JP2022182673A (ja) * | 2021-05-28 | 2022-12-08 | 浩平 田仲 | Learning system and learning method |
JP7082384B1 (ja) | 2021-05-28 | 2022-06-08 | 浩平 田仲 | Learning system and learning method |
JP7486860B1 (ja) | 2023-08-04 | 2024-05-20 | 株式会社計数技研 | Video composition device, video composition method, and program |
Also Published As
Publication number | Publication date |
---|---|
US10360814B2 (en) | 2019-07-23 |
US20160314713A1 (en) | 2016-10-27 |
EP3089139A4 (en) | 2017-06-14 |
JPWO2015097825A1 (ja) | 2017-03-23 |
JP6162259B2 (ja) | 2017-07-12 |
EP3089139A1 (en) | 2016-11-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6162259B2 (ja) | Motion learning support device and motion learning support method | |
WO2020179128A1 (ja) | Learning support system, learning support device, and program | |
US20100167248A1 (en) | Tracking and training system for medical procedures | |
JP4436966B2 (ja) | Endoscope tutorial system | |
US9396669B2 (en) | Surgical procedure capture, modelling, and editing interactive playback | |
US8314840B1 (en) | Motion analysis using smart model animations | |
CA2807614C (en) | Endoscope simulator | |
KR101816172B1 (ko) | Medical training simulation system and method | |
Breslin et al. | Modelling relative motion to facilitate intra-limb coordination | |
JP2005525598A (ja) | Surgical training simulator | |
Stanley et al. | Design of body-grounded tactile actuators for playback of human physical contact | |
CN108376487A (zh) | Limb training system and method based on virtual reality | |
JP6014450B2 (ja) | Motion learning support device | |
JP6521511B2 (ja) | Surgical training device | |
CN110038270A (zh) | Human-machine interaction system and method for a single-arm upper-limb rehabilitation training robot | |
CN1064854C (zh) | System for instructing students | |
Gonzalez-Romo et al. | Quantification of motion during microvascular anastomosis simulation using machine learning hand detection | |
JP4585997B2 (ja) | Surgical training device | |
CN112534491A (zh) | Medical simulator, medical training system, and method | |
CN112419826B (zh) | Virtual simulation laparoscopic surgery endoscope operation training method and system | |
Lacey et al. | Mixed-reality simulation of minimally invasive surgeries | |
JP2008242331A (ja) | Nursing simulator | |
JP7201998B2 (ja) | Surgical training device | |
JP3614278B2 (ja) | Eye training device and method, and recording medium | |
Kavathekar et al. | Towards quantifying surgical suturing skill with force, motion and image sensor data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application |
Ref document number: 13900488 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2015554412 Country of ref document: JP Kind code of ref document: A |
|
REEP | Request for entry into the european phase |
Ref document number: 2013900488 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013900488 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15104385 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |