WO2021162207A1 - Virtual reality system and method for rehabilitating exotropia patients based on artificial intelligence, and computer-readable medium - Google Patents

Virtual reality system and method for rehabilitating exotropia patients based on artificial intelligence, and computer-readable medium

Info

Publication number
WO2021162207A1
WO2021162207A1 (PCT/KR2020/016128)
Authority
WO
WIPO (PCT)
Prior art keywords
user
virtual reality
hmd module
gaze
exotropia
Prior art date
Application number
PCT/KR2020/016128
Other languages
English (en)
Korean (ko)
Inventor
오석희
양희경
황정민
황보택근
김제현
Original Assignee
가천대학교 산학협력단
서울대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 가천대학교 산학협력단, 서울대학교산학협력단
Publication of WO2021162207A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F: FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F9/00: Methods or devices for treatment of the eyes; Devices for putting-in contact lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/02: Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B3/08: Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing binocular or stereoscopic vision, e.g. strabismus
    • A61B3/085: Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing binocular or stereoscopic vision, e.g. strabismus for testing strabismus
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/113: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H: PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H5/00: Exercisers for the eyes
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G02B27/017: Head mounted
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Definitions

  • The present invention relates to a virtual reality system, method, and computer-readable medium for rehabilitation training of exotropia patients based on artificial intelligence.
  • More particularly, it relates to an artificial intelligence-based exotropia rehabilitation virtual reality system, method, and computer-readable medium that can relieve the boredom of repetitive training and, by collecting training progress data, provide strabismus diagnosis information for exotropia patients who have performed the virtual reality content.
  • Virtual reality is a technology that enables interaction between a user and a three-dimensional virtual space created by a computer system; it is a convergence technology that provides a sense of presence, as if the user actually existed in that space. In the global virtual reality market, head-mounted displays account for most of the market, and various types of virtual reality devices have recently been distributed. The influence of platforms that distribute virtual reality content, and of the content itself, is expected to expand.
  • Vision therapy, also called vision training, is a clinical approach that corrects eye movement disorders, abnormalities of binocular function such as amblyopia, and alignment disorders such as strabismus, and improves related symptoms. It encompasses various non-surgical methods of improving visual function through visual training. Patients with amblyopia and strabismus have traditionally undergone rehabilitation such as conventional eye convergence training for treatment.
  • The present invention relieves the boredom of repetitive training by providing exotropia patients with virtual reality content for eye convergence training, and collects training progress data from the patients who perform the content.
  • An object of the present invention is thus to provide an artificial intelligence-based virtual reality system, method, and computer-readable medium for exotropia rehabilitation training that is capable of providing strabismus diagnosis information.
  • To achieve the above object, in an embodiment of the present invention the virtual reality content can be executed in the HMD module, and the virtual reality content includes: a trigger step of moving a first object, located in a preset area at the center of the virtual reality screen, from its start position toward the user according to the user's controller manipulation; a targeting step of moving the virtual reality screen according to the orientation of the HMD module as the user's head moves; a release step of launching the first object from the virtual reality screen according to the user's controller manipulation; and a score calculation step of calculating a score by determining whether the first object launched in the release step contacts one or more second objects in the virtual reality screen. The present invention thus provides a rehabilitation training system for exotropia patients that includes an HMD module and a service server.
  • In an embodiment of the present invention, the one or more second objects may each be set to a preset size, and the coordinates of the second objects may each lie at a distance from the user's coordinates.
  • In an embodiment of the present invention, the HMD module collects the coordinate information of the user's gaze and the pupil position information on the virtual reality screen; in the trigger step, a release level of the first object is derived based on the coordinate information of the gaze and the user's pupil position information, and in the release step, the firing intensity or firing distance of the first object in the virtual reality screen may be determined based on the release level derived in the trigger step.
  • In an embodiment of the present invention, the HMD module collects the user's pupil position information on the virtual reality screen, and in the trigger step, when the user's pupil position falls outside a preset range, the position of the first object moving toward the user may be reset to the start position.
  • In an embodiment of the present invention, the HMD module collects the coordinate information of the user's gaze on the virtual reality screen; in the release step, a release area may be derived according to a preset criterion based on the coordinate information of the left-eye gaze and the right-eye gaze, and the first object may be launched within the range of the derived release area.
  • In an embodiment of the present invention, the virtual reality content further includes a strabismus diagnosis step of generating a gaze heat map based on the coordinate information of the user's gaze in the trigger step, the targeting step, and the release step, and transmitting the generated gaze heat map to the service server; the service server may derive strabismus diagnosis information for the received gaze heat map using a diagnosis model trained on learning gaze heat map data.
  • To achieve the above object, an embodiment of the present invention provides a rehabilitation training method for an exotropia patient using a rehabilitation training system including an HMD module and a service server, the method comprising, by the HMD module: a trigger step of moving a first object in a preset area at the center of the virtual reality screen from its start position toward the user according to the user's controller manipulation; a targeting step of moving the virtual reality screen according to the orientation of the HMD module as the user's head moves; a release step of launching the first object from the virtual reality screen according to the user's controller manipulation; and a score calculation step of calculating a score by determining whether the first object launched in the release step contacts one or more second objects in the virtual reality screen.
  • In addition, to achieve the above object, an embodiment of the present invention provides a computer-readable medium for implementing a method for rehabilitating an exotropia patient using an exotropia rehabilitation system including an HMD module and a service server, the computer-readable medium storing instructions that cause the components of the exotropia rehabilitation system to perform the following steps: by the HMD module, a trigger step of moving a first object in a preset area at the center of the virtual reality screen from its start position toward the user according to the user's controller manipulation; by the HMD module, a targeting step of moving the virtual reality screen according to the orientation of the HMD module as the user's head moves; by the HMD module, a release step of launching the first object from the virtual reality screen according to the user's controller manipulation; and, by the HMD module, a score calculation step of calculating a score by determining whether the first object launched in the release step contacts one or more second objects in the virtual reality screen.
  • According to an embodiment of the present invention, the difficulty of the game is adjusted by varying the release level of the first object according to the degree of the user's eye convergence, thereby providing customized virtual reality content.
  • According to an embodiment of the present invention, the user can check his or her eye convergence level on a virtual reality screen that displays information related to the coordinates of his or her gaze, and can perform eye convergence training while playing the game.
  • According to an embodiment of the present invention, by analyzing, based on artificial intelligence, the content performance information of a user who has executed the virtual reality content, strabismus diagnosis information on the degree of strabismus can be derived.
  • According to an embodiment of the present invention, the strabismus diagnosis information of the user who executed the virtual reality content can be provided to the user, the user's guardian, or a specialist, and can be used as data for the user's treatment and consultation.
  • FIG. 1 schematically shows the overall form of a rehabilitation training system according to an embodiment of the present invention.
  • FIG. 2 schematically shows a state in which a user using the rehabilitation training system according to an embodiment of the present invention wears an HMD module.
  • FIG. 3 schematically illustrates a step of performing virtual reality content according to an embodiment of the present invention.
  • FIG. 4 schematically shows a display screen in the HMD module provided in the trigger stage, the targeting stage and the release stage, and the score calculation stage according to an embodiment of the present invention.
  • FIG. 5 schematically shows a display screen in the HMD module provided in the trigger stage according to an embodiment of the present invention.
  • FIG. 6 schematically shows a display screen in the HMD module provided in the release step according to an embodiment of the present invention.
  • FIG. 7 schematically shows a display screen in the HMD module provided in the targeting step, the release step, and the score calculation step according to an embodiment of the present invention.
  • FIG. 8 schematically shows a display screen in the HMD module provided in the release step according to an embodiment of the present invention.
  • FIG. 9 schematically shows an internal configuration of a service server according to an embodiment of the present invention.
  • FIG. 10 schematically illustrates a gaze heat map generated based on coordinate information of a user's gaze according to an embodiment of the present invention.
  • FIG. 11 schematically shows the execution steps of the HMD module and the service server according to an embodiment of the present invention.
  • FIG. 12 schematically shows the operation of the diagnostic model learning unit of the service server according to an embodiment of the present invention.
  • FIG. 13 exemplarily shows a computing device according to an embodiment of the present invention.
  • Terms such as first and second may be used to describe various elements, but the elements are not limited by these terms; the terms are used only to distinguish one element from another.
  • For example, a first component may be referred to as a second component, and similarly a second component may be referred to as a first component. The term "and/or" includes a combination of a plurality of related listed items or any one of a plurality of related listed items.
  • a "part” includes a unit realized by hardware, a unit realized by software, and a unit realized using both.
  • one unit may be implemented using two or more hardware, and two or more units may be implemented by one hardware.
  • ' ⁇ unit' is not limited to software or hardware, and ' ⁇ unit' may be configured to be in an addressable storage medium or may be configured to reproduce one or more processors.
  • ' ⁇ ' denotes components such as software components, object-oriented software components, class components, and task components, and processes, functions, properties, and procedures. , subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays and variables.
  • components and ' ⁇ units' may be combined into a smaller number of components and ' ⁇ units' or further separated into additional components and ' ⁇ units'.
  • components and ' ⁇ units' may be implemented to play one or more CPUs in a device or secure multimedia card.
  • the "user terminal” referred to below may be implemented as a computer or portable terminal that can access a server or other terminal through a network.
  • the computer includes, for example, a laptop, a desktop, and a laptop equipped with a web browser (WEB Browser), and the portable terminal is, for example, a wireless communication device that ensures portability and mobility.
  • WEB Browser web browser
  • network refers to a wired network such as a local area network (LAN), a wide area network (WAN), or a value added network (VAN), or a mobile radio communication network or satellite. It may be implemented as any kind of wireless network, such as a communication network.
  • FIG. 1 schematically shows the overall form of a rehabilitation training system according to an embodiment of the present invention.
  • As shown in FIG. 1, the rehabilitation training system includes an HMD module 1000 and a service server 2000.
  • The service server 2000 and the HMD module 1000 correspond to computing devices including one or more processors and one or more memories; a user can play the eye convergence training game provided through the rehabilitation training system of the present invention, and by performing the training in a virtual space, the system can maximize the training continuity of exotropia patients, alleviating symptoms while generating interest through game elements.
  • The service server 2000 may analyze the patient's training data collected through the HMD module 1000 using the trained diagnosis model and transmit the resulting strabismus diagnosis information to the outside.
  • the HMD module 1000 and the service server 2000 may communicate through a network.
  • The HMD module 1000 may receive the user's input and motion through a controller, and the received input and motion are transmitted to the HMD module 1000 and reflected in the virtual reality content.
  • The HMD module 1000 includes a display unit 1100, a speaker unit 1200, an eye tracker unit 1300, a content execution unit 1400, and a strabismus diagnosis unit 1500.
  • the display unit 1100 displays a display screen provided to a user wearing the HMD module 1000 .
  • The rehabilitation training virtual reality system of the present invention provides virtual reality content in which, through the trigger step (S1000), the targeting step (S1100), the release step (S1200), and the score calculation step (S1300), a first object O1 is launched at any one of one or more second objects O2 according to the user's controller manipulation; the display unit 1100 displays the virtual reality screen shown while this content is provided.
  • the speaker unit 1200 may provide sound information to a user by outputting sound information of the virtual reality content.
  • The eye tracker unit 1300 detects the eye movement of the user wearing the HMD module 1000 and collects coordinate information of the user's gaze and pupil position information from the sensed eye movement.
  • The present invention is a rehabilitation training system for exotropia patients that implements content in which a first object O1 is fired at any one of one or more second objects O2; while the virtual reality content is running, the eye tracker unit 1300 detects the user's eye movement and collects the coordinate information of the user's gaze and the pupil position information. The collected gaze coordinate information and pupil position information may later be used to derive the user's strabismus diagnosis information.
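  • As a rough illustration only (the patent does not specify a data format), a per-frame record collected by the eye tracker unit 1300 might look like the following Python sketch; all field names are assumptions:

```python
# Hypothetical per-frame eye-tracking record: gaze coordinates on the
# virtual reality screen plus pupil positions for both eyes. These fields
# are assumptions for illustration; the patent does not define a schema.
from dataclasses import dataclass

@dataclass
class EyeSample:
    t: float                          # timestamp in seconds
    left_gaze: tuple[float, float]    # left-eye gaze point on the VR screen
    right_gaze: tuple[float, float]   # right-eye gaze point on the VR screen
    left_pupil: tuple[float, float]   # left pupil position
    right_pupil: tuple[float, float]  # right pupil position

sample = EyeSample(0.016, (0.48, 0.51), (0.61, 0.50), (0.30, 0.50), (0.72, 0.50))
```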
  • The content execution unit 1400 executes virtual reality content through the HMD module 1000, receives user input and motion through a controller on the provided virtual reality screen, and runs the game for eye convergence training. In one embodiment of the present invention, the content execution unit 1400 executes virtual reality content that includes a trigger step (S1000), a targeting step (S1100), a release step (S1200), a score calculation step (S1300), and a strabismus diagnosis step (S1400).
  • S1000: trigger step
  • S1100: targeting step
  • S1200: release step
  • S1300: score calculation step
  • S1400: strabismus diagnosis step
  • The strabismus diagnosis unit 1500 generates a gaze heat map based on the coordinate information of the user's gaze in the trigger step (S1000), the targeting step (S1100), and the release step (S1200) of the virtual reality content, and the generated gaze heat map is transmitted to the service server 2000.
  • The service server 2000 receives the gaze heat map generated by the HMD module 1000 and derives strabismus diagnosis information for the received gaze heat map using the trained diagnosis model.
  • To this end, the service server 2000 includes a strabismus diagnosis information derivation unit 2100 and a diagnosis model learning unit 2200.
  • The rehabilitation training system including the HMD module 1000 and the service server 2000 shown in FIG. 1 may further include elements other than the illustrated components; for convenience, only the components relevant to the rehabilitation training system according to embodiments of the present invention are shown.
  • FIG. 2 schematically shows a state in which a user using the rehabilitation training system according to an embodiment of the present invention wears the HMD module 1000.
  • As shown in FIG. 2, a virtual reality device that provides rehabilitation virtual reality content to a user may include the HMD module 1000 and a controller. The user wears the HMD module 1000 and holds the controller; the user's motion and input are received by the HMD module 1000 through the controller, and image information and sound information of the virtual reality are provided to the user through the HMD module 1000.
  • FIG. 3 schematically shows the execution steps of the content execution unit 1400 according to an embodiment of the present invention, and FIG. 4 schematically shows the display screens in the HMD module 1000 provided in the trigger step (S1000), the targeting step (S1100), the release step (S1200), and the score calculation step (S1300) according to an embodiment of the present invention.
  • The present invention is an exotropia patient rehabilitation system including an HMD module 1000 and a service server 2000.
  • In an embodiment, the virtual reality content can be executed by the content execution unit 1400, and includes: a trigger step (S1000) of moving the first object O1, located in a preset area at the center of the virtual reality screen, from its start position toward the user according to the user's controller manipulation; a targeting step (S1100) of moving the virtual reality screen according to the orientation of the HMD module 1000 as the user's head moves; a release step (S1200) of launching the first object O1 on the virtual reality screen according to the user's controller manipulation; a score calculation step (S1300) of calculating a score by determining whether the first object O1 launched in the release step contacts one or more second objects O2 in the virtual reality screen; and a strabismus diagnosis step (S1400) of generating a gaze heat map based on the coordinate information of the user's gaze in the trigger step (S1000), the targeting step (S1100), and the release step (S1200), and transmitting the generated gaze heat map to the service server 2000.
  • In the trigger step (S1000), the first object O1 in the preset area at the center of the virtual reality screen is moved from its start position toward the user according to the user's controller manipulation.
  • On the virtual reality screen, a first object O1 and one or more second objects O2 are displayed.
  • The one or more second objects O2 are each set to a preset size, and the coordinates of the second objects O2 each lie at a distance from the user's coordinates.
  • In other words, the second objects O2 are placed at respective distances from the user's coordinates and are each displayed with a predetermined size and shape.
  • The user provided with this virtual reality screen operates the controller, and according to the controller manipulation, the first object O1 in the preset area at the center of the screen is moved from its preset start position toward the user.
  • Specifically, the coordinates of the first object O1 are moved so that their distance from the user's coordinates becomes progressively smaller.
  • In this way, the user can naturally perform eye convergence training by keeping focus on the moving first object O1, as sketched below.
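  • A minimal sketch of this trigger-step movement, assuming a simple linear interpolation toward the user (the update rule, names, and step size below are hypothetical; the patent does not disclose an implementation):

```python
# Hypothetical trigger step: each controller input moves the first object a
# fixed fraction of the remaining way from its position toward the user, so
# its distance to the user's coordinates shrinks step by step.
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def step_toward(obj: Vec3, user: Vec3, fraction: float = 0.05) -> Vec3:
    """Move obj toward user by `fraction` of the remaining distance."""
    return Vec3(
        obj.x + (user.x - obj.x) * fraction,
        obj.y + (user.y - obj.y) * fraction,
        obj.z + (user.z - obj.z) * fraction,
    )

first_object = Vec3(0.0, 1.5, 10.0)  # assumed start position in the central area
user_pos = Vec3(0.0, 1.5, 0.0)       # user's coordinates
for _ in range(3):                   # three controller inputs
    first_object = step_toward(first_object, user_pos)
print(first_object)                  # z has moved closer to the user
```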
  • In the targeting step (S1100), the virtual reality screen moves according to the orientation of the HMD module 1000 as the user's head moves.
  • The user looks toward the second object O2 at which the first object O1 is to be launched, and as the user's head moves, the HMD module 1000 changes orientation and the virtual reality screen moves as shown in (d) of FIG. 4.
  • In the release step (S1200), the first object O1 is launched from the virtual reality screen according to the user's controller manipulation.
  • When the user performs the controller operation for firing the first object O1 on the virtual reality screen moved in the targeting step (S1100), the first object O1 is launched in the virtual reality screen as shown in (e) of FIG. 4, and the launched first object O1 comes into contact with the targeted second object O2.
  • In the score calculation step (S1300), the score is calculated by determining whether the first object O1 fired in the release step (S1200) comes into contact with one or more second objects O2 in the virtual reality screen.
  • In other words, the first object O1 released by the user is fired at and contacts the targeted second object O2, and the score calculation step (S1300) calculates the score by determining whether the fired first object O1 is in contact with the second object O2.
  • In the strabismus diagnosis step (S1400), a gaze heat map is generated based on the coordinate information of the user's gaze in the trigger step (S1000), the targeting step (S1100), and the release step (S1200), and the generated gaze heat map is transmitted to the service server 2000.
  • Specifically, the eye tracker unit 1300 of the HMD module 1000 collects the coordinate information of the user's gaze in the trigger step (S1000), the targeting step (S1100), and the release step (S1200) in real time, and the strabismus diagnosis unit 1500 of the HMD module 1000 generates a gaze heat map based on the collected coordinate information.
  • The gaze heat map visually displays how long the user's gaze stayed at each position on the virtual reality screen, based on gaze information that changes as the user performs the content, that is, according to the movement of the first object O1 or of the user's head.
  • In this way, a gaze heat map may be generated and transmitted to the service server 2000.
  • FIG. 5 schematically shows a display screen in the HMD module 1000 provided in the trigger step (S1000) according to an embodiment of the present invention.
  • FIG. 5 shows the display screen provided in the above-described trigger step (S1000), displayed by the display unit 1100 of the HMD module 1000.
  • On this display screen, the first object O1 is displayed at its start position in the preset area at the center of the virtual reality screen of the HMD module 1000.
  • In addition, one or more second objects O2, each set to a preset size, are displayed, with their coordinates placed at respective distances from the user's coordinates.
  • In the trigger step (S1000), as shown in FIG. 5, the position of the first object O1 is moved from the start position toward the user.
  • The one or more second objects O2 do not move; only the first object O1 moves toward the user according to the user's controller manipulation.
  • In an embodiment, the HMD module 1000 collects the user's pupil position information on the virtual reality screen, and in the trigger step (S1000), when the user's pupil position falls outside a preset range, the position of the first object O1 moving toward the user is reset to the start position.
  • Specifically, the eye tracker unit 1300 of the HMD module 1000 collects the user's pupil position information in the trigger step (S1000), and in the trigger step (S1000) it is determined, based on the collected pupil position information, whether the user's pupil position meets a preset criterion.
  • The preset criterion may include, for example, the case where the angle of deviation exceeds a set standard, or where the time spent outside the standard is greater than or equal to a preset time.
  • When the criterion is not met, the position of the first object O1 moving toward the user is reset to the start position.
  • For example, if the pupil position of the user provided with the screen shown in (c) of FIG. 5 does not meet the preset criterion, the position of the first object O1 is reset to the start position, and the screen shown in (a) of FIG. 5 may be provided again instead of the screen shown in (d) of FIG. 5.
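  • The reset rule might be sketched as follows, assuming illustrative thresholds for the deviation angle and the out-of-range time (the actual preset criteria are not disclosed in the patent):

```python
# Hypothetical reset check for the trigger step (S1000): if the pupil's
# deviation angle exceeds a threshold for at least a preset time, the first
# object is sent back to its start position. Thresholds are illustrative.
def update_trigger(deviation_angle_deg: float,
                   time_out_of_range_s: float,
                   obj_pos: tuple,
                   start_pos: tuple,
                   max_angle_deg: float = 5.0,
                   max_out_time_s: float = 1.0) -> tuple:
    """Return the (possibly reset) position of the first object."""
    if deviation_angle_deg > max_angle_deg and time_out_of_range_s >= max_out_time_s:
        return start_pos  # pupil drifted outward too long: reset
    return obj_pos

# Example: a 7.2-degree outward deviation held for 1.5 s triggers a reset.
pos = update_trigger(7.2, 1.5, obj_pos=(0.0, 1.5, 6.0), start_pos=(0.0, 1.5, 10.0))
```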
  • In an embodiment, in the trigger step (S1000), the release level of the first object O1 is derived based on the coordinate information of the user's gaze and the user's pupil position information collected while the first object O1 is moving.
  • Specifically, the eye tracker unit 1300 of the HMD module 1000 collects the gaze coordinate information and the pupil position information in the trigger step (S1000), and the content execution unit 1400 of the HMD module 1000 may derive the release level of the first object O1 according to a preset criterion based on this information.
  • The virtual reality content executed in this way moves the first object O1 displayed on the virtual reality screen from a preset start position toward the user, so that patients can perform eye convergence training with more interest while keeping focus on the first object O1; and while the user experiences the content, the coordinate information of the user's gaze and the pupil position information are collected and reflected in the content, improving the user's immersion.
  • FIG. 6 schematically shows a display screen in the HMD module provided in the release step according to an embodiment of the present invention.
  • The main purpose of the rehabilitation training system of the present invention is to assist in training the eyes of users with an exotropic disorder, such as exotropia or intermittent exotropia, in which the eye deviates outward.
  • Accordingly, the gaze coordinate information collected by the eye tracker of the HMD module 1000 may be displayed differently according to each exotropia patient's angle of deviation.
  • FIG. 6 shows display screens in the HMD module 1000 in which the derived release area differs according to the collected pupil position information.
  • As shown in (a) of FIG. 6, a release area can be derived according to a preset criterion based on the coordinate information of the left-eye gaze (Left point in FIG. 6(a)) and the right-eye gaze (Right point in FIG. 6(a)).
  • FIG. 6(b) shows that the gaze coordinate information on the virtual reality screen is displayed differently according to the deviation angle of the right eye.
  • In this case, the user's right eye has an outward deviation angle; therefore, the coordinate information of the right-eye gaze (Right point) is displayed at a greater distance from the first object O1 than the coordinate information of the left-eye gaze (Left point).
  • FIG. 6(c) likewise shows that the gaze coordinate information is displayed differently according to the deviation angle of the left eye, the opposite case of FIG. 6(b).
  • In this way, the coordinate information of the left-eye gaze and the right-eye gaze may be displayed differently on the virtual reality screen, and the release area is derived according to a preset criterion based on these coordinates.
  • The release areas derived in FIGS. 6(b) and 6(c) therefore have a wider range than the release area in FIG. 6(a).
  • In the release step (S1200), the first object O1 is launched within the range of the derived release area according to the user's controller manipulation.
  • In an embodiment, the first object O1 may be launched at a random point within the release area. The user therefore focuses his or her gaze on the first object O1 and aligns the two eyes symmetrically so that the gaze coordinates do not deviate from the area of the first object O1.
  • Preferably, the release area may be set within a range that does not deviate from the area of the first object O1.
  • Since the first object O1 is fired at a random point within the release area, the smaller the release area around the first object O1, the more accurately the first object O1 can be launched toward the second object O2.
  • In an embodiment, the release area may be displayed for a preset time according to a preset criterion based on an average value of the user's pupil position while the first object O1 moves in the trigger step (S1000).
  • For example, when the user's pupil positions remain symmetrically aligned, without deviation, while the first object moves in the trigger step (S1000), the release area is displayed for a longer time; otherwise, the release area may be displayed for a shorter time according to the preset criterion.
  • Alternatively, the release area may be displayed for a preset time according to a preset criterion based on an average value of the coordinate information of the user's gaze while the first object O1 moves in the trigger step (S1000).
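  • A possible sketch of how a release area could be derived from the left-eye and right-eye gaze coordinates and then used to pick a random launch point; the base radius and scale factor are assumptions, not values from the patent:

```python
# Hypothetical release step: the release area widens with the disparity
# between the two gaze points (as in FIG. 6(b)/(c), where an outward-deviated
# eye pushes one gaze point away), and the first object is launched at a
# random point inside that area.
import math
import random

def release_area_radius(left_gaze, right_gaze, base_radius=0.1, scale=0.5):
    """Radius grows with the distance between the left and right gaze points."""
    disparity = math.dist(left_gaze, right_gaze)
    return base_radius + scale * disparity

def random_launch_point(center, radius):
    """Uniformly sample a point in a disc of `radius` around `center`."""
    r = radius * math.sqrt(random.random())
    theta = random.uniform(0.0, 2.0 * math.pi)
    return (center[0] + r * math.cos(theta), center[1] + r * math.sin(theta))

print(release_area_radius((0.0, 0.0), (0.02, 0.0)))  # aligned eyes: tight area
print(release_area_radius((0.0, 0.0), (0.40, 0.0)))  # exotropic disparity: wide area
print(random_launch_point((0.5, 0.5), 0.11))
```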
  • FIG. 7 schematically shows a display screen in the HMD module 1000 provided in the targeting step (S1100), the release step (S1200), and the score calculation step (S1300) according to an embodiment of the present invention.
  • In an embodiment, a command for an operation to be performed by the user may be displayed on the virtual reality screen, as shown in (d) of FIG. 5.
  • In addition, a voice describing the operation to be performed may be output through the speaker unit 1200 of the HMD module 1000.
  • The user who has received the command looks around to carry it out; in the targeting step (S1100), the orientation of the HMD module 1000 changes according to the movement of the user's head, and the virtual reality screen moves accordingly, as shown in (a) of FIG. 7.
  • Next, the release step (S1200) of launching the first object O1 from the virtual reality screen according to the user's controller manipulation is performed.
  • The user manipulates the controller (for example, pushes a button) to fire the first object O1; according to the user's input, the first object O1 is launched, as shown in FIG. 7, in the direction of the displayed second object O2 so that the first object O1 and the second object O2 come into contact.
  • In an embodiment, the firing intensity or firing distance of the first object O1 in the virtual reality screen is determined based on the release level.
  • As described above, in the trigger step (S1000) the release level of the first object O1 is derived according to a preset criterion, based on the coordinate information of the user's gaze and the user's pupil position information collected while the first object O1 is moving.
  • In the release step, the firing intensity or firing distance of the first object in the virtual reality screen is determined based on the release level derived in the trigger step (S1000).
  • For example, the firing intensity or firing distance may be determined from the release level according to a preset criterion, as shown in Table 1 below.
  • Table 1:

    Release level | Firing distance | Firing intensity
    1             | 10 m            | Weak
    2             | 15 m            | Medium-weak
    3             | 20 m            | Medium
    4             | 25 m            | Medium-strong
    5             | 30 m            | Strong
  • A release level may be derived according to a preset criterion for deriving the level.
  • For example, when the pupil positions of the user remain aligned and the movement of the coordinate information of the user's gaze coincides with the movement of the first object O1, a higher release level in Table 1 can be derived.
  • In this way, the better the user maintains his or her gaze on the moving first object O1, the greater the firing intensity and firing distance, from which a better score can be obtained.
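  • Table 1 can be encoded directly; the mapping from gaze tracking to a release level below is a hypothetical stand-in for the patent's unspecified preset criterion:

```python
# Table 1 as a lookup: release level -> (firing distance in meters, intensity).
RELEASE_TABLE = {
    1: (10, "weak"),
    2: (15, "medium-weak"),
    3: (20, "medium"),
    4: (25, "medium-strong"),
    5: (30, "strong"),
}

def release_level(gaze_match_ratio: float) -> int:
    """Hypothetical criterion: the fraction of time the gaze tracked the
    moving first object is mapped to a level from 1 to 5."""
    return min(5, max(1, 1 + int(gaze_match_ratio * 5)))

level = release_level(0.85)          # gaze followed the object 85% of the time
distance_m, intensity = RELEASE_TABLE[level]
print(level, distance_m, intensity)  # -> 5 30 strong
```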
  • Thereafter, the content execution unit 1400 of the HMD module 1000 performs the score calculation step (S1300) of calculating a score by determining whether the first object O1 and the second object O2 are in contact.
  • The calculated score may be reflected in real time while the virtual reality content is running and displayed on the virtual reality screen, as shown in (c) of FIG. 7.
  • In this way, the virtual reality content collects the coordinate information of the user's gaze and the pupil position information while the user experiences the content and reflects them in the content, thereby improving the user's immersion.
  • FIG. 8 schematically shows a display screen in the HMD module 1000 provided in the release step (S1200) according to an embodiment of the present invention.
  • FIGS. 8(a) and 8(b) show display screens of the HMD module 1000 on which a score calculated in the score calculation step (S1300) is displayed.
  • In FIGS. 8(a) and 8(b), the first object O1 is shown being fired at and contacting the second object O2 targeted according to the user's controller manipulation. Comparing the two screens, each first object O1 is launched at and touches the same second object O2, yet the calculated scores differ. Thus, even if the second object O2 at which the first object O1 is launched is the same, the score calculated in the score calculation step (S1300) may differ.
  • In an embodiment, the score calculated in the score calculation step (S1300) is set differently according to a preset criterion for the area contacted by the first object O1.
  • FIG. 8(c) shows the second object O2 displayed in FIGS. 8(a) and 8(b).
  • For the second object O2 displayed on the virtual reality screen, the score calculated in the score calculation step (S1300) may be set differently according to a preset criterion for the area contacted by the first object O1.
  • As shown in FIG. 8(c), a score for each area, set according to the preset criterion, is displayed on the second object O2.
  • In other embodiments, scores may be assigned based on other preset criteria according to the settings of the virtual reality content.
  • In this way, the user is provided with a virtual reality screen on which a different score is displayed for each area of the second object O2; by converging and concentrating his or her eyes to aim at the target area of the second object O2 based on the displayed score information and obtain a higher score, the user's immersion in the virtual reality content can be improved. A sketch of such scoring follows.
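  • A minimal sketch of such area-based scoring, assuming concentric scoring zones like those suggested by FIG. 8(c); the zone radii and point values are illustrative:

```python
# Hypothetical score calculation step: the second object is divided into
# concentric zones, and the score depends on which zone the first object hits.
import math

SCORING_ZONES = [   # (outer radius, score), innermost zone first
    (0.2, 100),
    (0.5, 50),
    (1.0, 10),
]

def contact_score(hit_point, target_center) -> int:
    """Return the zone score, or 0 if the first object misses entirely."""
    d = math.dist(hit_point, target_center)
    for radius, score in SCORING_ZONES:
        if d <= radius:
            return score
    return 0

print(contact_score((0.1, 0.0), (0.0, 0.0)))  # center hit -> 100
print(contact_score((0.7, 0.0), (0.0, 0.0)))  # outer ring -> 10
```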
  • FIG. 9 schematically shows an internal configuration of a service server 2000 according to an embodiment of the present invention.
  • The service server 2000 of the present invention receives the gaze heat map generated by the strabismus diagnosis unit 1500 of the HMD module 1000 and derives strabismus diagnosis information for the received gaze heat map using a diagnosis model trained on learning gaze heat map data.
  • As shown in FIG. 9, the service server 2000 includes a strabismus diagnosis information derivation unit 2100 and a diagnosis model learning unit 2200.
  • The strabismus diagnosis information derivation unit 2100 derives strabismus diagnosis information for the gaze heat map received from the HMD module 1000 through a diagnosis model using machine learning. After receiving the gaze heat map, the strabismus diagnosis information derivation unit 2100 automatically performs a diagnosis using the diagnosis model and derives the strabismus diagnosis information for that gaze heat map.
  • The diagnosis model learning unit 2200 may train the diagnosis model for deriving strabismus diagnosis information using the learning gaze heat map data.
  • In other words, strabismus diagnosis information is derived for the received gaze heat map by the diagnosis model trained on the learning gaze heat map data.
  • the service server 2000 in FIG. 9 may further include elements other than the illustrated elements, but only elements related to the rehabilitation training system according to embodiments of the present invention are displayed for convenience.
  • FIG. 10 schematically illustrates a gaze heat map generated based on coordinate information of a user's gaze according to an embodiment of the present invention.
  • The strabismus diagnosis unit 1500 of the HMD module 1000 of the present invention performs the strabismus diagnosis step (S1400) of generating a gaze heat map based on the coordinate information of the user's gaze in the trigger step (S1000), the targeting step (S1100), and the release step (S1200), and transmitting the generated gaze heat map to the service server 2000.
  • As shown in FIG. 10, the gaze heat map is an image that displays, based on the coordinate information of the user's gaze, how long the user's gaze stayed at each position on the virtual reality screen.
  • By comparing such a gaze heat map with the coordinates of the first object O1 and the second object O2, it is possible to determine where an exotropia patient's gaze stayed and for how long.
  • Such a gaze heat map may be displayed as two-dimensional image information, as shown in FIG. 10.
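  • Such a heat map can be sketched as a simple two-dimensional accumulation of gaze samples; the grid resolution and normalization below are assumptions:

```python
# Hypothetical gaze heat map generation: gaze coordinates collected during
# the trigger, targeting, and release steps are binned into a 2D grid, so
# each cell counts how long the gaze dwelt there.
import numpy as np

def gaze_heatmap(gaze_samples, width=64, height=64):
    """gaze_samples: iterable of (x, y) points normalized to [0, 1)."""
    grid = np.zeros((height, width), dtype=np.float32)
    for x, y in gaze_samples:
        grid[int(y * height), int(x * width)] += 1.0  # one sample = one tick of dwell time
    if grid.max() > 0:
        grid /= grid.max()  # normalize to [0, 1] for display or transmission
    return grid

heatmap = gaze_heatmap([(0.50, 0.50), (0.50, 0.50), (0.52, 0.49)])
```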
  • FIG. 11 schematically shows the steps performed by the HMD module 1000 and the service server 2000 according to an embodiment of the present invention.
  • The rehabilitation training system including the HMD module 1000 and the service server 2000 of the present invention performs: generating a gaze heat map based on the coordinate information of the user's gaze (S200); transmitting the generated gaze heat map to the service server 2000 (S210); deriving strabismus diagnosis information for the received gaze heat map using the diagnosis model trained on learning gaze heat map data (S220); and transmitting the derived strabismus diagnosis information (S230).
  • In step S200, the strabismus diagnosis unit 1500 of the HMD module 1000 generates a gaze heat map based on the coordinate information of the user's gaze collected while the content execution unit 1400 performs the trigger step (S1000), the targeting step (S1100), and the release step (S1200).
  • As described above, the gaze heat map is an image that displays, based on the coordinate information of the user's gaze, how long the user's gaze stayed at each position on the virtual reality screen.
  • In step S210, the strabismus diagnosis unit 1500 of the HMD module 1000 transmits the generated gaze heat map to the service server 2000.
  • In step S220, the service server 2000 derives strabismus diagnosis information for the received gaze heat map using the diagnosis model trained on learning gaze heat map data.
  • The diagnosis model is trained on gaze heat map data of a plurality of exotropia patients who performed the virtual reality content in the past, so as to derive strabismus diagnosis information for the gaze heat map received from the HMD module 1000.
  • The strabismus diagnosis information may include information on the eye in which the user's strabismus occurs, the angle of strabismus, and the frequency of occurrence of exotropia.
  • The diagnosis model can analyze the gaze heat map using artificial neural network techniques that incorporate temporal structure, such as RNN, LSTM, and GRU, and may include one or more trained deep learning-based artificial neural network modules.
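  • As a hedged sketch of such a temporal diagnosis model (the layer sizes, the number of diagnosis classes, and the use of PyTorch are all assumptions; the patent only names RNN/LSTM/GRU-style networks):

```python
# Hypothetical diagnosis model: a sequence of flattened gaze heat map frames
# is fed to an LSTM, and the final hidden state is classified into strabismus
# diagnosis outputs (e.g. normal / left exotropia / right exotropia).
import torch
import torch.nn as nn

class StrabismusDiagnosisModel(nn.Module):
    def __init__(self, map_size=64 * 64, hidden=128, num_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size=map_size, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, heatmap_seq):      # (batch, time, 64*64)
        _, (h_n, _) = self.lstm(heatmap_seq)
        return self.head(h_n[-1])        # logits over diagnosis classes

model = StrabismusDiagnosisModel()
logits = model(torch.rand(1, 10, 64 * 64))  # ten heat-map frames
```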
  • In step S230, the service server 2000 transmits the derived strabismus diagnosis information.
  • The derived strabismus diagnosis information may be transmitted to the HMD module 1000 to be displayed there, or may be transmitted to the terminal of the user, the user's guardian, or a specialist to be used for exotropia diagnosis.
  • In this way, the HMD module 1000 generates a gaze heat map based on the coordinate information of the user's gaze and the pupil position information and transmits it to the service server 2000, and the service server 2000 derives strabismus diagnosis information from the received gaze heat map and transmits it.
  • FIG. 12 schematically shows the operation of the diagnosis model learning unit 2200 of the service server 2000 according to an embodiment of the present invention.
  • As described above, the service server 2000 of the present invention receives the gaze heat map generated by the strabismus diagnosis unit 1500 of the HMD module 1000 and derives strabismus diagnosis information for it using the trained diagnosis model.
  • As shown in (a) of FIG. 12, the service server 2000 includes the diagnosis model learning unit 2200, which trains the diagnosis model based on the learning gaze heat map data.
  • As shown in (b) of FIG. 12, the learning gaze heat map data for training the diagnosis model may be gaze heat map data of a plurality of exotropia patients who have used the rehabilitation training system of the present invention.
  • In other words, a plurality of exotropia patients with different strabismus information, including the eye in which strabismus developed and the strabismus angle, performed the virtual reality content in the past, and the resulting gaze heat maps are used as learning gaze heat map data to train the diagnosis model.
  • Preferably, the learning gaze heat map data is the gaze heat map data of a plurality of past users of the rehabilitation system, stored in the service server 2000.
  • In addition, a diagnosis result for a gaze heat map received from the HMD module 1000 may be received, and the gaze heat map together with its diagnosis result may be used as learning gaze heat map data for training the diagnosis model.
  • FIG. 13 exemplarily shows an internal configuration of a computing device according to an embodiment of the present invention.
  • The computing device 11000 may include at least one processor 11100, a memory 11200, a peripheral interface 11300, an input/output (I/O) subsystem 11400, a power circuit 11500, and a communication circuit 11600.
  • the computing device 11000 may correspond to the service server 2000 or the HMD module 1000 .
  • The memory 11200 may include, for example, high-speed random access memory, a magnetic disk, SRAM, DRAM, ROM, flash memory, or non-volatile memory.
  • The memory 11200 may include software modules, instruction sets, or other various data required for the operation of the computing device 11000, including data of the learned model.
  • access to the memory 11200 from other components such as the processor 11100 or the peripheral device interface 11300 may be controlled by the processor 11100 .
  • Peripheral interface 11300 may couple input and/or output peripherals of computing device 11000 to processor 11100 and memory 11200 .
  • the processor 11100 may execute a software module or an instruction set stored in the memory 11200 to perform various functions for the computing device 11000 and process data.
  • the input/output subsystem 11400 may couple various input/output peripherals to the peripheral interface 11300 .
  • the input/output subsystem 11400 may include a controller for coupling a peripheral device such as a monitor, keyboard, mouse, printer, or a touch screen or sensor as required to the peripheral interface 11300 .
  • input/output peripherals may be coupled to peripheral interface 11300 without going through input/output subsystem 11400 .
  • the power circuit 11500 may supply power to all or some of the components of the terminal.
  • The power circuit 11500 may include a power management system, one or more power sources such as a battery or alternating current (AC), a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other components for power generation, management, and distribution.
  • the communication circuit 11600 may enable communication with another computing device using at least one external port.
  • The communication circuit 11600 may include an RF circuit that transmits and receives RF signals, also known as electromagnetic signals, to enable communication with other computing devices.
  • FIG. 13 is only an example of the computing device 11000; the computing device 11000 may omit some of the components shown in FIG. 13, further include components not shown in FIG. 13, or have a configuration or arrangement that combines two or more components.
  • For example, a computing device for a communication terminal in a mobile environment may further include a touch screen or a sensor in addition to the components shown in FIG. 13, and the communication circuit 11600 may include circuitry for various RF communication methods (WiFi, 3G, LTE, Bluetooth, NFC, Zigbee, etc.).
  • The components that may be included in the computing device 11000 may be implemented in hardware, software, or a combination of both, including integrated circuits specialized for one or more signal processing tasks or applications.
  • Methods according to an embodiment of the present invention may be implemented in the form of program instructions that can be executed through various computing devices and recorded in a computer-readable medium.
  • the program according to the present embodiment may be configured as a PC-based program or an application dedicated to a mobile terminal.
  • the application to which the present invention is applied may be installed in the user terminal through a file provided by the file distribution system.
  • the file distribution system may include a file transmission unit (not shown) that transmits the file according to a request of the user terminal.
  • The devices described above may be implemented as hardware components, software components, and/or a combination of hardware and software components.
  • For example, the devices and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions.
  • The processing device may execute an operating system (OS) and one or more software applications running on the operating system.
  • The processing device may also access, store, manipulate, process, and generate data in response to the execution of the software.
  • Although a single processing device is sometimes described for convenience, the processing device may include a plurality of processing elements and/or a plurality of types of processing elements; for example, it may include a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
  • Software may comprise a computer program, code, instructions, or a combination of one or more of these, and may configure a processing device to operate as desired or command the processing device independently or collectively.
  • The software and/or data may be embodied permanently or temporarily in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, to be interpreted by the processing device or to provide instructions or data to the processing device.
  • The software may be distributed over networked computing devices and stored or executed in a distributed manner. The software and data may be stored in one or more computer-readable recording media.
  • the method according to the embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium.
  • the computer-readable medium may include program instructions, data files, data structures, etc. alone or in combination.
  • the program instructions recorded on the medium may be specially designed and configured for the embodiment, or may be known and available to those skilled in the art of computer software.
  • Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine language codes such as those generated by a compiler, but also high-level language codes that can be executed by a computer using an interpreter or the like.
  • the hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Engineering & Computer Science (AREA)
  • Ophthalmology & Optometry (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Epidemiology (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Human Computer Interaction (AREA)
  • Primary Health Care (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Pain & Pain Management (AREA)
  • Rehabilitation Therapy (AREA)
  • Vascular Medicine (AREA)
  • User Interface Of Digital Computer (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The present invention relates to a virtual reality system and method for rehabilitating exotropia patients based on artificial intelligence, and a computer-readable medium, and more particularly to a virtual reality system, method, and computer-readable medium that provide virtual reality content enabling exotropia patients to train eye convergence, so as to relieve the tedium of repetitive training for exotropia patients, and that collect training progress data so as to provide strabismus diagnosis information on exotropia patients who have trained with the virtual reality content.
PCT/KR2020/016128 2020-02-11 2020-11-17 Virtual reality system and method for rehabilitating exotropia patients based on artificial intelligence, and computer-readable medium WO2021162207A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0016552 2020-02-11
KR1020200016552A KR102120112B1 (ko) 2020-02-11 2020-02-11 인공지능에 기반한 외사시환자 재활훈련 가상현실 시스템, 방법 및 컴퓨터-판독가능매체

Publications (1)

Publication Number Publication Date
WO2021162207A1 (fr)

Family

ID=71081987

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/016128 WO2021162207A1 (fr) 2020-02-11 2020-11-17 Système de réalité virtuelle et procédé permettant de rééduquer des patients souffrant d'exotropie sur la base d'une intelligence artificielle, et support lisible par ordinateur

Country Status (2)

Country Link
KR (1) KR102120112B1 (fr)
WO (1) WO2021162207A1 (fr)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102120112B1 (ko) * 2020-02-11 2020-06-09 가천대학교 산학협력단 인공지능에 기반한 외사시환자 재활훈련 가상현실 시스템, 방법 및 컴퓨터-판독가능매체
KR102406472B1 (ko) * 2020-07-31 2022-06-07 전남대학교산학협력단 가상현실 기반 사시 진단 교육용 시뮬레이션 시스템
CN112641610B (zh) * 2020-12-21 2023-04-07 韩晓光 弱视的训练方法、装置及系统
KR102563365B1 (ko) * 2021-05-11 2023-08-03 고려대학교 산학협력단 일상생활 속 사시 발현 모니터링 시스템 및 방법
KR102549616B1 (ko) 2021-09-03 2023-06-29 재단법인 아산사회복지재단 가상현실 기반 안구운동 및 시지각 훈련 제공 장치 및 방법
KR102436681B1 (ko) 2022-05-16 2022-08-26 주식회사 엠디에이 스마트 미러를 이용한 신경계 질환 보조 진단 시스템
KR102460828B1 (ko) 2022-05-16 2022-11-01 주식회사 엠디에이 스마트 미러를 이용한 운동 재활 시스템

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101276097B1 (ko) * 2012-11-16 2013-06-18 (주) 피디케이리미티드 발달장애 치료용 다중 시뮬레이터 및 시뮬레이션 방법
KR101966164B1 (ko) * 2017-01-12 2019-04-05 고려대학교산학협력단 가상현실을 이용한 안과 검사 시스템 및 방법
KR20190058169A (ko) * 2017-11-21 2019-05-29 대한민국(국립재활원장) 가상 현실 기반 치료 시스템 및 방법
KR20190062023A (ko) * 2017-11-28 2019-06-05 전남대학교산학협력단 사시 진단 시스템 및 방법, 시선 영상 획득 시스템, 컴퓨터 프로그램
KR102120112B1 (ko) * 2020-02-11 2020-06-09 가천대학교 산학협력단 인공지능에 기반한 외사시환자 재활훈련 가상현실 시스템, 방법 및 컴퓨터-판독가능매체

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101650706B1 (ko) * 2014-10-07 2016-09-05 주식회사 자원메디칼 웨어러블 디스플레이장치


Also Published As

Publication number Publication date
KR102120112B1 (ko) 2020-06-09

Similar Documents

Publication Publication Date Title
WO2021162207A1 (fr) Système de réalité virtuelle et procédé permettant de rééduquer des patients souffrant d'exotropie sur la base d'une intelligence artificielle, et support lisible par ordinateur
WO2018080149A2 (fr) Système de rééducation cognitive à réalité virtuelle associé à la biométrique
WO2018124809A1 (fr) Terminal vestimentaire et son procédé de commande
WO2017126910A1 (fr) Unité d'affichage et dispositif électronique comprenant ladite unité d'affichage
WO2017119788A1 (fr) Visiocasque électronique
WO2013055024A1 (fr) Appareil pour entraîner la capacité de reconnaissance à l'aide d'un robot et procédé associé
WO2018074837A1 (fr) Serveur de récompenses pour défis et procédé d'exploitation associé
WO2013133583A1 (fr) Système et procédé de réhabilitation cognitive par interaction tangible
WO2020050636A1 (fr) Procédé et appareil de reconnaissance de gestes basée sur l'intention de l'utilisateur
WO2016010368A1 (fr) Dispositif de commande portatif, et procédé d'authentification et d'appairage associé
WO2015008935A1 (fr) Système de simulation et de formation à la réanimation cardiopulmonaire (cpr), et procédé de commande correspondant
KR101936082B1 (ko) 리얼센스 카메라를 이용한 가상현실 기반의 손가락 재활 시스템 및 방법
WO2017115887A1 (fr) Dispositif permettant de fournir un jeu de reconnaissance de mouvement, procédé associé, et support d'enregistrement lisible par ordinateur sur lequel ledit procédé est enregistré
US10936060B2 (en) System and method for using gaze control to control electronic switches and machinery
WO2018143509A1 (fr) Robot mobile et son procédé de commande
WO2020054954A1 (fr) Procédé et système pour fournir une rétroaction virtuelle en temps réel
WO2020242087A1 (fr) Dispositif électronique et procédé de correction de données biométriques sur la base de la distance entre le dispositif électronique et l'utilisateur, mesurée à l'aide d'au moins un capteur
Chen et al. Effect of temporality, physical activity and cognitive load on spatiotemporal vibrotactile pattern recognition
WO2024053989A1 (fr) Système et procédé de recommandation d'exercice de rééducation sur la base d'une détection d'environnement de vie à l'aide d'une reconnaissance d'image numérique
WO2017090815A1 (fr) Appareil et procédé de mesure de l'amplitude de mouvement articulaire
WO2021221490A1 (fr) Système et procédé de compréhension fiable d'interrogations d'images basée sur des caractéristiques contextuelles
WO2020085745A1 (fr) Système de gestion de données médicales et procédé associé
WO2020235730A1 (fr) Procédé de prédiction de performance d'apprentissage basé sur un motif de balayage d'un agent d'apprentissage dans un environnement d'apprentissage vidéo
WO2023101180A1 (fr) Système et procédé de traitement de rééducation basé sur la réalité mixte
WO2023033545A1 (fr) Dispositif, procédé et programme de recherche et d'entraînement d'un locus rétinien préféré d'un patient présentant des dommages du champ visuel

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20919109

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20919109

Country of ref document: EP

Kind code of ref document: A1