WO2021162207A1 - Virtual-reality system and method for rehabilitating exotropia patients on basis of artificial intelligence, and computer-readable medium - Google Patents


Info

Publication number
WO2021162207A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
virtual reality
hmd module
gaze
exotropia
Prior art date
Application number
PCT/KR2020/016128
Other languages
French (fr)
Korean (ko)
Inventor
오석희
양희경
황정민
황보택근
김제현
Original Assignee
가천대학교 산학협력단
서울대학교산학협력단
Priority date
Filing date
Publication date
Application filed by 가천대학교 산학협력단 and 서울대학교산학협력단
Publication of WO2021162207A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F 9/00 Methods or devices for treatment of the eyes; Devices for putting-in contact lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/02 Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/08 Subjective types, i.e. testing apparatus requiring the active assistance of the patient, for testing binocular or stereoscopic vision, e.g. strabismus
    • A61B 3/085 Subjective types, i.e. testing apparatus requiring the active assistance of the patient, for testing binocular or stereoscopic vision, for testing strabismus
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/113 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for determining or recording eye movement
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 5/00 Exercisers for the eyes
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/30 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance, relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Definitions

  • the present invention relates to a virtual reality system, method, and computer-readable medium for rehabilitation training for exotropia patients based on artificial intelligence.
  • More particularly, it relates to an artificial-intelligence-based virtual reality system, method, and computer-readable medium for exotropia rehabilitation that can relieve the boredom of repetitive training and that, by collecting training progress data, can provide strabismus diagnosis information for exotropia patients who have performed the virtual reality content.
  • Virtual reality is a technology that enables interaction between a user and a three-dimensional virtual space created by a computer system; it is a convergence technology that provides a sense of reality, as if the user actually existed in that space. In the global virtual reality market, head-mounted displays occupy most of the market, and various types of virtual reality devices have recently been distributed. The influence of platforms that distribute virtual reality content is expected to expand.
  • Vision therapy, also called vision training, is a clinical approach that corrects eye movement disorders, abnormalities of binocular function such as amblyopia, and alignment disorders such as strabismus, and improves related symptoms. It includes various non-surgical methods of improving visual function through visual training. Until now, patients with amblyopia and strabismus have undergone rehabilitation such as traditional eye convergence training for treatment.
  • An object of the present invention is to provide an artificial-intelligence-based virtual reality system, method, and computer-readable medium for rehabilitation training of exotropia patients that relieves the boredom of repetitive training by providing virtual reality content for eye convergence training, and that provides strabismus diagnosis information by collecting training progress data from patients who have performed the content.
  • the virtual reality content can be executed in the HMD module, and the virtual reality content includes: a trigger step of moving a first object in a preset area of the central part of the virtual reality screen from a start position toward the user according to the user's controller operation; a targeting step of moving the virtual reality screen according to the direction manipulation of the HMD module as the user's head moves; a release step of firing the first object from the virtual reality screen according to the user's controller operation; and a score calculation step of calculating a score by determining whether the first object fired in the release step is in contact with one or more second objects existing in the virtual reality screen. One embodiment of the present invention thus provides a rehabilitation training system for exotropia patients that includes an HMD module and a service server.
  • the one or more second objects may be set to have respective preset sizes, and the coordinates of the second objects may have respective distances from the user's coordinates.
  • the HMD module collects the coordinate information of the user's gaze and the pupil position information on the virtual reality screen. In the trigger step, a release level of the first object is derived based on the coordinate information of the gaze and the user's pupil position information, and in the release step, the firing intensity or firing distance of the first object in the virtual reality screen may be determined based on the release level derived in the trigger step.
  • the HMD module collects the user's pupil position information on the virtual reality screen, and in the trigger step, when the user's pupil position is out of a preset range, the position of the first object being moved toward the user may be reset to the start position.
  • the HMD module collects the coordinate information of the user's gaze on the virtual reality screen; in the release step, a release area may be derived according to a preset criterion based on the coordinate information of the left eye's gaze and the coordinate information of the right eye's gaze, and the first object may be launched within the range of the derived release area.
  • the virtual reality content generates a gaze heat map based on the coordinate information of the user's gaze in the trigger step, the targeting step, and the release step, and further comprises a strabismus diagnosis step of transmitting the generated gaze heat map to the service server; the service server may derive strabismus diagnosis information for the received gaze heat map by a diagnosis model learned from learning gaze heat map data.
  • Another embodiment of the present invention provides a rehabilitation training method for an exotropia patient using a rehabilitation training system that includes an HMD module and a service server, the method comprising, by the HMD module: a trigger step of moving a first object in a preset area of the central part of the virtual reality screen from a start position toward the user according to the user's controller operation; a targeting step of moving the virtual reality screen according to the direction manipulation of the HMD module as the user's head moves; a release step of firing the first object from the virtual reality screen according to the user's controller operation; and a score calculation step of calculating a score by determining whether the first object fired in the release step is in contact with one or more second objects existing in the virtual reality screen.
  • an embodiment of the present invention is a computer-readable medium for implementing a method for rehabilitation of an exotropia patient using an exotropia rehabilitation system including an HMD module and a service server, wherein the computer-readable medium stores instructions for causing the components of the exotropia rehabilitation system to perform the following steps: by the HMD module, a trigger step of moving a first object in a preset area of the central part of the virtual reality screen from a start position toward the user according to the user's controller operation; by the HMD module, a targeting step of moving the virtual reality screen according to the direction manipulation of the HMD module as the user's head moves; by the HMD module, a release step of firing the first object from the virtual reality screen according to the user's controller operation; and, by the HMD module, a score calculation step of calculating a score by determining whether the first object fired in the release step is in contact with one or more second objects existing in the virtual reality screen.
  • the difficulty of the game is adjusted by varying the release level of the first object implemented in the game according to the degree of the user's eye convergence, thereby providing customized virtual reality content.
  • the user can check his or her degree of eye convergence through a virtual reality screen on which information related to the coordinate information of his or her gaze is displayed, and can perform eye convergence training while playing the game.
  • by analyzing, based on artificial intelligence, the content performance information of the user who has executed the virtual reality content, the present invention can derive strabismus diagnosis information on the degree of strabismus.
  • the strabismus diagnosis information of the user who executed the virtual reality content can be provided to the user, the user's guardian, or a specialist, and can be used as data for the user's treatment and consultation.
  • FIG. 1 schematically shows the overall form of a rehabilitation training system according to an embodiment of the present invention.
  • FIG. 2 schematically shows a state in which a user using the rehabilitation training system according to an embodiment of the present invention wears an HMD module.
  • FIG. 3 schematically illustrates a step of performing virtual reality content according to an embodiment of the present invention.
  • FIG. 4 schematically shows a display screen in the HMD module provided in the trigger stage, the targeting stage and the release stage, and the score calculation stage according to an embodiment of the present invention.
  • FIG. 5 schematically shows a display screen in the HMD module provided in the trigger stage according to an embodiment of the present invention.
  • FIG. 6 schematically shows a display screen in the HMD module provided in the release step according to an embodiment of the present invention.
  • FIG. 7 schematically shows a display screen in the HMD module provided in the targeting step, the release step, and the score calculation step according to an embodiment of the present invention.
  • FIG. 8 schematically shows a display screen in the HMD module provided in the release step according to an embodiment of the present invention.
  • FIG. 9 schematically shows an internal configuration of a service server according to an embodiment of the present invention.
  • FIG. 10 schematically illustrates a gaze heat map generated based on coordinate information of a user's gaze according to an embodiment of the present invention.
  • FIG. 11 schematically shows the execution steps of the HMD module and the service server according to an embodiment of the present invention.
  • FIG. 12 schematically shows the operation of the diagnostic model learning unit of the service server according to an embodiment of the present invention.
  • FIG. 13 exemplarily shows a computing device according to an embodiment of the present invention.
  • first, second, etc. may be used to describe various elements, but the elements are not limited by the terms. The above terms are used only for the purpose of distinguishing one component from another.
  • a first component may be referred to as a second component, and similarly, a second component may also be referred to as a first component. The term 'and/or' includes a combination of a plurality of related listed items or any one of the plurality of related listed items.
  • a "part” includes a unit realized by hardware, a unit realized by software, and a unit realized using both.
  • one unit may be implemented using two or more hardware, and two or more units may be implemented by one hardware.
  • a '~unit' is not limited to software or hardware, and a '~unit' may be configured to reside in an addressable storage medium or may be configured to run on one or more processors.
  • a '~unit' includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, properties, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
  • components and '~units' may be combined into a smaller number of components and '~units' or further separated into additional components and '~units'.
  • components and '~units' may be implemented to run one or more CPUs in a device or a secure multimedia card.
  • the "user terminal” referred to below may be implemented as a computer or portable terminal that can access a server or other terminal through a network.
  • the computer includes, for example, a notebook, a desktop, and a laptop equipped with a web browser, and the portable terminal is, for example, a wireless communication device that ensures portability and mobility.
  • network refers to a wired network such as a local area network (LAN), a wide area network (WAN), or a value added network (VAN), or to any kind of wireless network such as a mobile radio communication network or a satellite communication network.
  • FIG. 1 schematically shows the overall form of a rehabilitation training system according to an embodiment of the present invention.
  • the rehabilitation training system includes an HMD module 1000 and a service server 2000 .
  • the service server 2000 and the HMD module 1000 each correspond to a computing device including one or more processors and one or more memories. A user can perform the eye convergence training game provided through the rehabilitation training system of the present invention, and by performing it in a virtual space, symptoms can be alleviated and interest can be generated through game elements, maximizing the training continuity of exotropia patients.
  • the service server 2000 may analyze the patient's training data collected through the HMD module 1000 through the learned diagnosis model and transmit the analyzed strabismus diagnosis information to the outside.
  • the HMD module 1000 and the service server 2000 may communicate through a network.
  • the HMD module 1000 may receive the user's input and operation through a controller, and the received input and operation are reflected in the virtual reality content.
  • the HMD module 1000 includes a display unit 1100 , a speaker unit 1200 , an eye tracker unit 1300 , a content execution unit 1400 , and a strabismus diagnosis unit 1500 .
  • the display unit 1100 displays a display screen provided to a user wearing the HMD module 1000 .
  • the rehabilitation training virtual reality system of the present invention provides virtual reality content in which, through the trigger step (S1000), the targeting step (S1100), the release step (S1200), and the score calculation step (S1300), the first object (O1) is launched at any one of one or more second objects (O2) according to the user's controller operation, and the display unit 1100 displays the virtual reality screen shown while this content is provided.
  • the speaker unit 1200 may provide sound information to a user by outputting sound information of the virtual reality content.
  • the eye tracker unit 1300 may detect the eye movement of the user wearing the HMD module 1000, and collects coordinate information of the user's gaze and pupil position information from the detected eye movement.
  • the present invention is a rehabilitation training system for exotropia patients and implements content in which a first object (O1) is fired at any one of one or more second objects (O2). While the virtual reality content is running, the eye tracker unit 1300 detects the user's eye movement and collects the coordinate information of the user's gaze and the pupil position information. Thereafter, the collected coordinate information of the gaze and the pupil position information may be used to derive the user's strabismus diagnosis information.
  • the content execution unit 1400 may execute virtual reality content through the HMD module 1000, receive the user's input and motion through a controller on the provided virtual reality screen, and thereby run a game for eye convergence training. In one embodiment of the present invention, the content execution unit 1400 runs virtual reality content including a trigger step (S1000), a targeting step (S1100), a release step (S1200), a score calculation step (S1300), and a strabismus diagnosis step (S1400).
  • S1000 trigger step
  • S1100 targeting step
  • S1200 release step
  • S1300 score calculation step
  • S1400 strabismus diagnosis step
  • the strabismus diagnosis unit 1500 generates a gaze heat map based on the coordinate information of the user's gaze in the trigger step (S1000), the targeting step (S1100), and the release step (S1200) of the virtual reality content.
  • the generated gaze heat map is transmitted to the service server 2000 .
  • the service server 2000 receives the gaze heat map generated by the HMD module 1000 and derives strabismus diagnosis information for the received gaze heat map by the learned diagnosis model.
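The publication does not specify the diagnosis model's architecture. As a minimal illustrative sketch, the learned diagnosis model is stood in for by a nearest-centroid classifier over flattened gaze heat maps; the names (`diagnose`, `CENTROIDS`, `flatten`) and the toy 2x2 heat maps are hypothetical, not taken from the publication:

```python
# Minimal sketch of deriving strabismus diagnosis information from a
# gaze heat map. A nearest-centroid classifier over flattened heat maps
# stands in for the learned diagnosis model; all names are hypothetical.

def flatten(heatmap):
    """Flatten a 2-D gaze heat map (list of rows) into one vector."""
    return [v for row in heatmap for v in row]

def distance_sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Class centroids learned from training gaze heat map data
# (here: toy 2x2 heat maps for illustration only).
CENTROIDS = {
    "normal":    [[0.0, 0.0], [0.0, 1.0]],   # gaze concentrated on target
    "exotropia": [[0.6, 0.0], [0.0, 0.4]],   # gaze spread outward
}

def diagnose(heatmap):
    """Return the label of the closest centroid as diagnosis information."""
    flat = flatten(heatmap)
    return min(CENTROIDS,
               key=lambda label: distance_sq(flat, flatten(CENTROIDS[label])))
```

In practice the service server would train a far richer model on the learning gaze heat map data; the sketch only shows the inference flow of heat map in, diagnosis label out.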
  • the service server 2000 includes a strabismus diagnosis information derivation unit 2100 and a diagnosis model learning unit 2200 .
  • the rehabilitation training system including the HMD module 1000 and the service server 2000 shown in FIG. 1 may further include elements other than the illustrated components; for convenience, only the components relevant to the embodiments of the present invention are shown.
  • FIG. 2 schematically shows a state in which a user using the rehabilitation training system according to an embodiment of the present invention wears the HMD module 1000 .
  • a virtual reality device that provides virtual reality content for rehabilitation to a user may include an HMD module 1000 and a controller. As shown in FIG. 2, the user wears the HMD module 1000 and holds the controller; the user's motion and input can be received by the HMD module 1000 through the controller, and image information and sound information in virtual reality can be provided to the user through the HMD module 1000.
  • FIG. 3 schematically shows the execution steps of the content execution unit 1400 according to an embodiment of the present invention.
  • FIG. 4 schematically shows a display screen in the HMD module 1000 provided in the trigger step (S1000), the targeting step (S1100), the release step (S1200), and the score calculation step (S1300) according to an embodiment of the present invention.
  • the present invention is an exotropia patient rehabilitation system including an HMD module 1000 and a service server 2000.
  • the virtual reality content can be executed by the content execution unit 1400, and the virtual reality content includes: a trigger step (S1000) of moving the first object O1 in a preset area of the central portion of the virtual reality screen from the start position toward the user according to the user's controller manipulation; a targeting step (S1100) of moving the virtual reality screen according to the direction manipulation of the HMD module 1000 as the user's head moves; a release step (S1200) of firing the first object (O1) on the virtual reality screen according to the user's controller manipulation; a score calculation step (S1300) of calculating a score by determining whether the first object (O1) fired in the release step (S1200) is in contact with one or more second objects (O2) existing in the virtual reality screen; and a strabismus diagnosis step (S1400) of generating a gaze heat map based on the coordinate information of the user's gaze in the trigger step (S1000), the targeting step (S1100), and the release step (S1200), and transmitting the generated gaze heat map to the service server 2000.
  • the first object O1 in the preset area of the central part of the virtual reality screen is moved from the start position to the user's side according to the user's manipulation of the controller.
  • a first object O1 and one or more second objects O2 are displayed.
  • the one or more second objects O2 are set to have respective preset sizes, and the coordinates of the second objects O2 have respective distances from the user's coordinates.
  • the second objects O2 have respective distances from the user's coordinates, and are displayed in a form having a predetermined size and shape, respectively.
  • the user provided with such a virtual reality screen operates the controller, and according to the user's controller manipulation, the first object O1 in the preset area of the central part of the virtual reality screen is moved from the preset start position to the user's side.
  • the coordinates of the first object O1 are moved so that its distance from the user's coordinates becomes progressively shorter as it approaches the user's side.
  • the first object O1 in the preset area of the central part of the virtual reality screen is moved from the start position to the user's side according to the user's manipulation of the controller.
  • the user can naturally perform eye convergence training by focusing on the moving first object O1.
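The trigger step described above can be sketched as a simple update of the first object's coordinates toward the user's coordinates on each controller operation. The step size, positions, and function names below are hypothetical illustrations, not values from the publication:

```python
# Sketch of the trigger step (S1000): each controller input moves the
# first object from its preset start position toward the user's
# coordinates, so the user's eyes converge while tracking it.

def move_toward_user(obj_pos, user_pos, step=0.5):
    """Move obj_pos a fixed step along the straight line toward user_pos."""
    delta = [u - o for o, u in zip(obj_pos, user_pos)]
    dist = sum(d * d for d in delta) ** 0.5
    if dist <= step:                      # arrived at the user's side
        return list(user_pos)
    return [o + d / dist * step for o, d in zip(obj_pos, delta)]

start = [0.0, 1.5, 10.0]                  # hypothetical start position
user = [0.0, 1.5, 0.0]                    # hypothetical user coordinates
pos = start
for _ in range(3):                        # three controller operations
    pos = move_toward_user(pos, user)     # object approaches the user
```

After three operations the object has moved 1.5 units closer along the line of sight, which is the geometric stimulus for convergence training.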
  • the virtual reality screen moves according to the direction manipulation of the HMD module 1000 according to the movement of the user's head.
  • the user looks at the second object O2, which is the target at which the first object O1 is to be fired, and according to the direction manipulation of the HMD module 1000 as the user's head moves, the virtual reality screen moves as shown in (d) of FIG. 4.
  • the first object O1 is launched from the virtual reality screen according to the user's manipulation of the controller.
  • when the user performs a controller operation for firing the first object O1 on the virtual reality screen moved in the targeting step (S1100), the first object O1 is launched in the virtual reality screen as shown in (e) of FIG. 4, and the launched first object O1 comes into contact with the target second object O2.
  • the score is calculated by determining whether the first object (O1) fired in the release step (S1200) comes into contact with one or more second objects (O2) existing in the virtual reality screen.
  • the first object O1 released by the user is fired and comes into contact with the target second object O2, and in the score calculation step (S1300) the score is calculated by determining whether the fired first object O1 is in contact with the second object O2.
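The score calculation step described above amounts to a contact test between the fired first object and each second object. A minimal sketch, assuming spherical objects; the radii, positions, point values, and names are hypothetical:

```python
# Sketch of the score calculation step (S1300): the fired first object
# is in contact with a second object when the distance between their
# centres is no more than the sum of their radii.

def in_contact(p1, r1, p2, r2):
    """True if two spheres (centre, radius) touch or overlap."""
    dist = sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5
    return dist <= r1 + r2

def calculate_score(shot_pos, shot_radius, targets):
    """targets: list of (centre, radius, points). Sum the points of hits."""
    return sum(points for centre, radius, points in targets
               if in_contact(shot_pos, shot_radius, centre, radius))

targets = [([0.0, 0.0, 20.0], 1.0, 10),   # second objects placed at
           ([5.0, 0.0, 25.0], 2.0, 20)]   # preset distances from the user
score = calculate_score([0.2, 0.0, 20.0], 0.3, targets)
```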
  • a gaze heat map is generated based on the coordinate information of the user's gaze in the trigger step (S1000), the targeting step (S1100) and the release step (S1200), and the generated gaze heat map is transmitted to the service server 2000 .
  • the eye tracker unit 1300 of the HMD module 1000 collects the coordinate information of the user's gaze in the trigger step (S1000), the targeting step (S1100), and the release step (S1200) in real time, and the strabismus diagnosis unit 1500 of the HMD module 1000 generates a gaze heat map based on the collected coordinate information of the user's gaze.
  • the gaze heat map visually displays how long the user's gaze dwelled at each position on the virtual reality screen, based on the gaze information that changes as the user performs the virtual reality content, that is, according to the movement of the first object O1 or the movement of the user's head.
  • a gaze heat map may be generated, and the generated gaze heat map may be transmitted to the service server 2000 .
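The gaze heat map described above can be sketched as a grid that accumulates the gaze coordinate samples collected during the trigger, targeting, and release steps. The grid resolution and all names are hypothetical:

```python
# Sketch of gaze heat map generation: gaze coordinate samples are
# binned into a grid whose cells count dwell (one sample = one unit).

GRID_W, GRID_H = 4, 4                     # hypothetical grid resolution

def build_gaze_heatmap(samples, screen_w=1.0, screen_h=1.0):
    """samples: (x, y) gaze coordinates on the virtual reality screen."""
    heatmap = [[0 for _ in range(GRID_W)] for _ in range(GRID_H)]
    for x, y in samples:
        col = min(int(x / screen_w * GRID_W), GRID_W - 1)
        row = min(int(y / screen_h * GRID_H), GRID_H - 1)
        heatmap[row][col] += 1            # accumulate dwell at this cell
    return heatmap

samples = [(0.5, 0.5), (0.52, 0.51), (0.9, 0.1)]
heatmap = build_gaze_heatmap(samples)
```

The resulting grid is the structure that would be serialized and transmitted to the service server 2000 for diagnosis.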
  • FIG. 5 schematically shows a display screen in the HMD module 1000 provided in the trigger step S1000 according to an embodiment of the present invention.
  • FIG. 5 shows a display screen in the HMD module 1000 provided in the above-described trigger step (S1000).
  • the display screen as shown in FIG. 5 is displayed by the display unit 1100 of the HMD module 1000 .
  • the first object O1 is displayed at its start position in a preset area in the center of the virtual reality screen displayed in the HMD module 1000.
  • one or more second objects O2 set to have respective preset sizes are displayed, and the coordinates of one or more second objects O2 are displayed to have respective distances from the user's coordinates.
  • in the trigger step (S1000), as shown in the figures, the position of the first object O1 is moved from the starting position toward the user's position.
  • the one or more second objects O2 are not moved, and only the first objects O1 are moved toward the user according to the user's manipulation of the controller.
  • the HMD module 1000 collects the user's pupil position information on the virtual reality screen, and in the trigger step (S1000), when the user's pupil position is out of a preset range, the position of the first object O1 being moved toward the user is reset to the starting position.
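The reset rule described above can be sketched as follows; the threshold value, start position, and function names are hypothetical illustrations:

```python
# Sketch of the reset rule in the trigger step (S1000): if the user's
# pupil position drifts out of a preset range, the moving first object
# snaps back to its start position so the convergence attempt restarts.

START_POS = [0.0, 1.5, 10.0]              # hypothetical start position
MAX_DEVIATION = 0.05                      # hypothetical preset range

def update_trigger(obj_pos, pupil_offset):
    """pupil_offset: pupil deviation from the aligned (symmetric) position."""
    if abs(pupil_offset) > MAX_DEVIATION:
        return list(START_POS)            # out of range: reset to start
    return obj_pos                        # in range: keep current position

kept = update_trigger([0.0, 1.5, 6.0], pupil_offset=0.02)   # within range
reset = update_trigger([0.0, 1.5, 6.0], pupil_offset=0.2)   # out of range
```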
  • in one embodiment, the eye tracker unit 1300 of the HMD module 1000 collects the user's pupil position information in the trigger step (S1000), and in the trigger step (S1000) it is determined, based on the collected pupil position information, whether the user's pupil position meets a preset criterion, such as whether the angle of deviation stays within a set range, or whether the time spent outside the range is greater than or equal to a preset time. When the criterion is not met, the position of the first object O1 being moved toward the user is reset to the starting position.
  • when the pupil position of the user provided with the screen as shown in (c) of FIG. 5 does not meet the preset criterion, the position of the first object O1 is reset to the starting position, and the virtual reality screen provided to the user returns to the screen shown in (a) of FIG. 5 instead of proceeding to the screen shown in (d) of FIG. 5.
  • the release level of the first object O1 is derived based on the coordinate information of the user's gaze and the user's pupil position information while the first object O1 is moving.
  • the eye tracker unit 1300 of the HMD module 1000 collects the coordinate information of the gaze and the pupil position information in the trigger step (S1000), and the content execution unit 1400 of the HMD module 1000 may derive the release level of the first object O1 according to a preset criterion based on the coordinate information of the user's gaze and the pupil position information in the trigger step (S1000).
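The release-level derivation can be sketched as a mapping from the gaze-to-object tracking error, accumulated while the first object moves, to a discrete level that later sets the firing intensity or distance. The thresholds, level values, and names are hypothetical:

```python
# Sketch of release-level derivation in the trigger step (S1000): the
# closer the gaze points stay to the moving first object, the higher
# the derived release level.

def release_level(gaze_errors):
    """gaze_errors: per-frame distance between gaze point and object."""
    mean_error = sum(gaze_errors) / len(gaze_errors)
    if mean_error < 0.1:
        return 3          # well converged: strongest / farthest release
    if mean_error < 0.3:
        return 2          # moderately converged
    return 1              # poorly converged: weakest release

level = release_level([0.05, 0.08, 0.04])
```

This is the customization hook: varying the level with the degree of eye convergence adjusts the game difficulty per user.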
  • the virtual reality content executed in this way moves the first object O1 displayed on the virtual reality screen from a preset start position toward the user, so that the user keeps focusing on the first object O1; this lets patients with the disorder perform eye convergence training with more interest, and while the user experiences the virtual reality content, the coordinate information of the user's gaze and the pupil position information are collected and reflected in the content, improving the user's immersion.
  • FIG. 6 schematically shows a display screen in the HMD module provided in the release step according to an embodiment of the present invention.
  • the main purpose of the rehabilitation training system of the present invention is to assist in training the eyes of a user with an exotropic disorder, such as exotropia or intermittent exotropia, in which the angle of deviation of eye movement is shifted outward.
  • the coordinate information of the gaze collected by the eye tracker of the HMD module 1000 may be displayed differently according to each exotropia patient's angle of deviation.
  • FIG. 6 shows display screens in the HMD module 1000 in which the derived release area differs according to the collected pupil position information.
  • as shown in (a) of FIG. 6, a release area can be derived according to a preset criterion based on the coordinate information of the left eye's gaze (Left point in FIG. 6 (a)) and the right eye's gaze (Right point in FIG. 6 (a)).
  • FIG. 6 (b) shows that the coordinate information of the gaze on the virtual reality screen is displayed differently according to the deviation angle of the right eye.
  • the user's right eye looking at the virtual reality screen has an outward oblique angle, and therefore the coordinate information of the right eye's gaze in the virtual reality screen (Right point) is displayed at a greater distance from the first object O1 than the coordinate information of the left eye's gaze (Left point).
  • FIG. 6 (c), conversely to FIG. 6 (b), shows that the coordinate information of the gaze in the virtual reality screen is displayed differently according to the deviated angle of the left eye.
  • in this way, the coordinate information of the left eye's gaze and the right eye's gaze may be displayed differently on the virtual reality screen, and the release area is derived according to a preset criterion based on the coordinate information of the gaze of each eye.
  • accordingly, the release area derived in FIGS. 6(b) and 6(c) covers a wider range than the release area in FIG. 6(a).
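One simple way the "preset criterion" for the release area could work, consistent with the behavior described (a larger deviation between the two eyes' gaze points yields a wider area), is a bounding box covering both gaze points plus a margin. This is a hedged sketch: the function name, box representation, and margin value are assumptions, not the patent's actual criterion.

```python
def release_area(left_gaze, right_gaze, margin=0.05):
    """Axis-aligned box (x_min, y_min, x_max, y_max) covering both eyes'
    gaze points plus a margin, in normalized screen coordinates.
    The wider the gap between the gaze points (as in exotropia),
    the wider the derived release area."""
    (lx, ly), (rx, ry) = left_gaze, right_gaze
    return (min(lx, rx) - margin, min(ly, ry) - margin,
            max(lx, rx) + margin, max(ly, ry) + margin)

def area_width(box):
    return box[2] - box[0]

aligned = release_area((0.50, 0.50), (0.51, 0.50))   # eyes nearly symmetric
deviated = release_area((0.50, 0.50), (0.70, 0.50))  # right eye deviates outward
```

With these illustrative numbers the deviated case produces a wider release area than the aligned case, matching FIGS. 6(b) and 6(c) versus 6(a).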
  • the first object O1 is fired, according to the user's manipulation of the controller, within the range of the derived release area.
  • that is, the first object O1 may be launched at a random position within the range of the release area. Therefore, the user focuses his or her gaze on the first object O1 and aligns both eyes symmetrically so that the coordinate information of the gaze does not deviate from the area of the first object O1.
  • the release area may be set within a range that does not deviate from the area of the first object O1.
  • the first object O1 is fired at a random position within the range of the release area; therefore, the smaller the release area is relative to the area of the first object O1, the more accurately the first object O1 can be launched toward the second object O2.
  • the release area may be displayed for a preset time, determined according to a preset criterion based on the average value of the user's pupil position while the first object O1 moves in the trigger step (S1000).
  • for example, when the user's pupil positions remain symmetrically aligned without deviation while the first object moves in the trigger step (S1000), the release area is displayed for a longer time; conversely, when deviation occurs, the release area may be displayed for a shorter time according to the preset criterion.
  • likewise, the release area may be displayed for a preset time determined according to a preset criterion based on the average value of the coordinate information of the user's gaze while the first object O1 moves in the trigger step (S1000).
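The display-time rule above (longer display when the pupils stayed aligned, shorter when deviation occurred) can be illustrated with a small function. The base time, penalty, and floor values are invented for the example; the patent only states that the time follows a preset criterion.

```python
def release_area_duration(avg_deviation_deg, base_s=3.0, penalty_s=0.5, min_s=0.5):
    """Display time (seconds) of the release area: starts from an assumed
    base of 3.0 s and shrinks as the average oblique deviation measured
    during the trigger step grows, never dropping below 0.5 s."""
    return max(min_s, base_s - penalty_s * avg_deviation_deg)
```

A user whose eyes stayed aligned (average deviation near 0) would see the release area for the full base time, while a large average deviation cuts the display time down to the floor.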
  • FIG. 7 schematically shows a display screen in the HMD module 1000 provided in the targeting step (S1100), the release step (S1200), and the score calculation step (S1300) according to an embodiment of the present invention.
  • a command for an operation to be performed by the user may be displayed on the virtual reality screen as shown in FIG. 5(d).
  • in addition, a voice announcing the operation to be performed may be output through the speaker unit 1200 of the HMD module 1000.
  • the user who has received the command on the virtual reality screen looks around to perform it; in the targeting step (S1100), the direction of the HMD module 1000 is manipulated by the movement of the user's head, and the virtual reality screen moves accordingly, as shown in FIG. 7(a).
  • a release step (S1200) of emitting the first object O1 from the virtual reality screen according to the user's controller operation is performed.
  • the user manipulates the controller (e.g., pushes a button) to fire the first object O1; according to the user's input, as shown, the first object O1 is launched in the direction of the displayed second object O2 so that the first object O1 and the second object O2 come into contact.
  • at this time, the firing intensity or firing distance of the first object O1 on the virtual reality screen is determined as follows.
  • in the trigger step (S1000), a release level of the first object O1 is derived according to a preset criterion, based on the coordinate information of the user's gaze and the user's pupil position information while the first object O1 is moving.
  • in the release step (S1200), the firing intensity or firing distance of the first object O1 on the virtual reality screen is determined based on the release level derived in the trigger step (S1000), for example as shown in Table 1 below.
  • Table 1:

        Release level   Firing distance   Firing intensity
        1               10 m              Weak
        2               15 m              Weak
        3               20 m              Medium
        4               25 m              Medium-strong
        5               30 m              Strong
  • the release level may be derived according to a preset criterion. For example, when the movement of the coordinate information of the user's gaze coincides with the movement of the first object O1, that is, when the pupils of the user using the virtual reality content track the first object O1, a large release level value in Table 1 above may be derived. In this way, the longer the user's line of sight follows the movement of the first object O1, the greater the firing intensity and firing distance from which a better score can be obtained.
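Table 1 and the release-level derivation just described can be sketched together. The agreement metric (fraction of frames in which the gaze tracked O1) and the 1-to-5 binning are illustrative assumptions; only the level-to-distance/intensity mapping comes from Table 1.

```python
# Table 1: release level -> (firing distance in meters, firing intensity)
RELEASE_TABLE = {1: (10, "weak"), 2: (15, "weak"), 3: (20, "medium"),
                 4: (25, "medium-strong"), 5: (30, "strong")}

def release_level(frames_on_target, total_frames):
    """Map how consistently the user's gaze followed O1 during the
    trigger step to a release level 1..5 (assumed linear binning)."""
    ratio = frames_on_target / total_frames
    return min(5, 1 + int(ratio * 5))

level = release_level(90, 100)       # gaze followed O1 for 90% of frames
distance_m, intensity = RELEASE_TABLE[level]
```

A patient who keeps both eyes converged on O1 for most of its travel reaches level 5 and thus the strongest, longest shot, which is how the content rewards sustained convergence.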
  • next, the content execution unit 1400 of the HMD module 1000 performs a score calculation step (S1300) of calculating a score by determining whether the first object O1 and the second object O2 are in contact.
  • that is, when the fired first object O1 contacts the second object O2, the content execution unit 1400 of the HMD module 1000 determines the contact and calculates a score, and the calculated score may be reflected in real time while the virtual reality content is being executed and displayed on the virtual reality screen as shown in FIG. 7(c).
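The contact determination in the score calculation step can be illustrated as a simple distance test against the second object's preset size. The spherical contact model, positions, and base score are assumptions for the sketch, not the content execution unit's actual logic.

```python
import math

def contact_score(o1_pos, o2_pos, o2_radius, base_score=100):
    """Score step sketch: O1 is judged to contact O2 when its position
    lies within O2's radius (each second object has a preset size)."""
    return base_score if math.dist(o1_pos, o2_pos) <= o2_radius else 0
```

A hit anywhere within the object's radius scores; a miss scores zero. The per-area refinement described with FIG. 8 is sketched separately below that discussion.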
  • in this way, the coordinate information of the user's gaze and the pupil position information are collected while the user experiences the virtual reality content and reflected back into it, thereby improving the user's immersion.
  • FIG. 8 schematically shows a display screen in the HMD module 1000 provided in the release step (S1200) according to an embodiment of the present invention.
  • FIGS. 8 (a) and 8 (b) show a display screen of the HMD module 1000 in which a score calculated by performing the score calculation step S1300 is displayed.
  • in FIGS. 8(a) and 8(b), the first object O1 is shown being fired at the second object O2 targeted according to the user's manipulation of the controller and coming into contact with it. Referring to each of these screens, each first object O1 is launched and touches the same second object O2, yet the calculated scores are displayed differently. As such, even if the second object O2 at which the first object O1 is launched is the same, the score calculated in the score calculation step (S1300) may differ.
  • the score calculated in the score calculation step (S1300) is set differently according to a preset criterion based on the area of the second object O2 contacted by the first object O1.
  • FIG. 8(c) shows the second object O2 displayed in FIGS. 8(a) and (b).
  • for the second object O2 displayed on the virtual reality screen, the score is calculated in the score calculation step (S1300) according to a preset criterion based on the area contacted by the first object O1, and the calculated score may be set differently for each area.
  • a score set for each area according to the preset criterion is assigned to the second object O2 and displayed.
  • a score may be assigned based on other preset criteria according to settings in virtual reality content.
  • the user may thus be provided with a virtual reality screen in which the score assigned to each area of the second object O2 is displayed; in order to obtain a higher score, the user converges and concentrates his or her gaze on the target area of the second object O2 based on the displayed score information, which improves immersion in the virtual reality content.
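The per-area scoring on the second object can be sketched as concentric zones, like a target face: inner areas are worth more, which is what motivates the user to converge precisely. The ring radii and point values here are illustrative assumptions; the patent only says scores are set per area by a preset criterion.

```python
import math

def zone_score(hit_point, center, rings=((0.1, 100), (0.25, 50), (0.5, 10))):
    """Per-area score sketch for the second object O2: rings are
    (radius, points) pairs ordered inner to outer; a hit scores the
    innermost ring that contains it, and a miss scores zero."""
    d = math.dist(hit_point, center)
    for radius, points in rings:
        if d <= radius:
            return points
    return 0
```

Hitting the center yields the highest score, a contact near the rim yields less, and the same object can therefore produce different scores, as in FIGS. 8(a) and 8(b).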
  • FIG. 9 schematically shows an internal configuration of a service server 2000 according to an embodiment of the present invention.
  • the service server 2000 of the present invention receives the gaze heat map generated by the strabismus diagnosis unit 1500 of the HMD module 1000, and derives strabismus diagnosis information for the received gaze heat map by means of a diagnosis model learned from learning gaze heat map data. As shown in FIG. 9, the service server 2000 includes a strabismus diagnosis information derivation unit 2100 and a diagnosis model learning unit 2200.
  • the strabismus diagnosis information derivation unit 2100 derives strabismus diagnosis information for the gaze heat map received from the HMD module 1000 through a diagnosis model using machine learning. After receiving the gaze heat map, the strabismus diagnosis information derivation unit 2100 automatically performs a diagnosis using the diagnostic model, and derives strabismus diagnosis information for the gaze heat map.
  • the diagnosis model learning unit 2200 may learn a diagnosis model for deriving strabismus diagnosis information using the learning gaze heat map data.
  • in other words, strabismus diagnosis information for the received gaze heat map is derived by the diagnostic model learned based on the learning gaze heat map data.
  • the service server 2000 in FIG. 9 may further include elements other than those illustrated; for convenience, only the elements related to the rehabilitation training system according to embodiments of the present invention are shown.
  • FIG. 10 schematically illustrates a gaze heat map generated based on coordinate information of a user's gaze according to an embodiment of the present invention.
  • the strabismus diagnosis unit 1500 of the HMD module 1000 of the present invention performs a strabismus diagnosis step (S1400) of generating a gaze heat map based on the coordinate information of the user's gaze in the trigger step (S1000), the targeting step (S1100), and the release step (S1200), and transmitting the generated gaze heat map to the service server 2000.
  • specifically, the strabismus diagnosis unit 1500 of the HMD module 1000 generates a gaze heat map based on the coordinate information of the user's gaze. As shown in FIG. 10, such a gaze heat map is an image that displays, based on the coordinate information of the user's gaze, how long the user's gaze stayed at each position on the virtual reality screen.
  • by comparing the exotropia patient's point of gaze in such a gaze heat map with the coordinates of the first object O1 and the second object O2, it is possible to grasp where the patient's gaze stayed and for how long.
  • Such a gaze heat map may be displayed as two-dimensional image information as shown in FIG. 10 .
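A gaze heat map of this kind can be sketched as a two-dimensional grid of dwell times accumulated from gaze samples. The grid resolution, normalized coordinates, and sampling interval are assumptions for illustration; the strabismus diagnosis unit's actual image format is not specified here.

```python
def gaze_heatmap(gaze_samples, grid=(4, 4), dt=1 / 60):
    """Accumulate per-cell dwell time (seconds) from gaze coordinates
    normalized to [0, 1); each sample represents dt seconds of gaze.
    Returns a rows x cols grid, a stand-in for the 2D heat map image."""
    rows, cols = grid
    heat = [[0.0] * cols for _ in range(rows)]
    for x, y in gaze_samples:
        col = min(cols - 1, int(x * cols))
        row = min(rows - 1, int(y * rows))
        heat[row][col] += dt
    return heat
```

Cells where the gaze lingered accumulate large values, so comparing the hottest cells with the coordinates of O1 and O2 reveals where and how long the patient's gaze stayed, as the text describes.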
  • FIG. 11 schematically shows the steps performed by the HMD module 1000 and the service server 2000 according to an embodiment of the present invention.
  • the rehabilitation training system including the HMD module 1000 and the service server 2000 of the present invention performs: generating a gaze heat map based on the coordinate information of the user's gaze (S200); transmitting the generated gaze heat map to the service server 2000 (S210); deriving strabismus diagnosis information for the received gaze heat map by the diagnostic model learned based on the learning gaze heat map data (S220); and transmitting the derived strabismus diagnosis information (S230).
  • in step S200, the strabismus diagnosis unit 1500 of the HMD module 1000 generates a gaze heat map based on the coordinate information of the user's gaze collected while the content execution unit 1400 performs the trigger step (S1000), the targeting step (S1100), and the release step (S1200).
  • the gaze heat map is an image that displays, based on the coordinate information of the user's gaze, how long the user's gaze stays at each position on the virtual reality screen.
  • in step S210, the strabismus diagnosis unit 1500 of the HMD module 1000 transmits the generated gaze heat map to the service server 2000.
  • in step S220, the service server 2000 derives strabismus diagnosis information for the received gaze heat map by the diagnostic model learned from the learning gaze heat map data.
  • the diagnosis model is trained on gaze heat map data of a plurality of exotropia patients who performed the virtual reality content in the past, so as to derive strabismus diagnosis information for the gaze heat map received from the HMD module 1000.
  • the strabismus diagnosis information may include information on the eye in which the user's strabismus occurs, the angle of strabismus, and the frequency of occurrence of exotropia.
  • the diagnostic model can analyze the gaze heat map using artificial neural network techniques that model temporal dependencies, such as RNN, LSTM, and GRU, and may include one or more trained deep-learning-based artificial neural network modules.
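The patent's diagnosis model is a trained temporal neural network (RNN/LSTM/GRU) over gaze heat maps; as a stand-in with the same input/output contract, the sketch below derives coarse strabismus information from the dwell-time asymmetry between the left and right halves of a single heat map. The feature, threshold, and output keys are all invented for illustration and are far simpler than a learned model.

```python
def diagnose(heatmap):
    """Toy stand-in for the diagnosis model: compare dwell time on the
    left vs. right half of the heat map grid and flag a large asymmetry
    as suspected outward deviation. Threshold 0.3 is an assumption."""
    left = sum(sum(row[: len(row) // 2]) for row in heatmap)
    right = sum(sum(row[len(row) // 2:]) for row in heatmap)
    asym = (right - left) / max(left + right, 1e-9)
    return {"exotropia_suspected": abs(asym) > 0.3,
            "deviating_side": "right" if asym > 0 else "left"}
```

In the actual system this function would be replaced by the trained network applied to a temporal sequence of heat maps, returning the richer diagnosis information (affected eye, strabismus angle, frequency) described above.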
  • in step S230, the service server 2000 transmits the derived strabismus diagnosis information.
  • the derived strabismus diagnosis information may be transmitted to the HMD module 1000 so as to be displayed there, or may be transmitted to the user's terminal or to a terminal of the user's guardian or a specialist, to be utilized for exotropia diagnosis.
  • in this way, the HMD module 1000 generates a gaze heat map based on the coordinate information of the user's gaze and the pupil position information and transmits it to the service server 2000, and the service server 2000 derives and transmits strabismus diagnosis information for the received gaze heat map.
  • FIG. 12 schematically illustrates the operation of the diagnostic model learning unit 2200 of the service server 2000 according to an embodiment of the present invention.
  • the service server 2000 of the present invention receives the gaze heat map generated by the strabismus diagnosis unit 1500 of the HMD module 1000 and derives strabismus diagnosis information for it by the diagnosis model learned from the learning gaze heat map data. As shown in FIG. 12(a), the service server 2000 includes a diagnosis model learning unit 2200, which learns the diagnosis model based on the learning gaze heat map data.
  • the learning gaze heat map data for training the diagnostic model may be gaze heat map data of a plurality of exotropia patients who used the rehabilitation training system of the present invention, as shown in FIG. 12(b).
  • in other words, a plurality of exotropia patients with different strabismus information, including the deviating eye and the strabismus angle, performed the virtual reality content in the past, and the plurality of gaze heat maps generated then are used as learning gaze heat map data to train the diagnostic model.
  • the learning gaze heat map data is the gaze heat map data of a plurality of exotropia patients who used the rehabilitation system in the past, stored in the service server 2000.
  • a diagnosis result for the gaze heat map received from the HMD module 1000 may be received, and the gaze heat map including the diagnosis result may be utilized as learning gaze heat map data for learning the diagnostic model.
  • FIG. 13 exemplarily shows an internal configuration of a computing device according to an embodiment of the present invention.
  • the computing device 11000 may include at least one processor 11100, a memory 11200, a peripheral interface 11300, an input/output subsystem (I/O subsystem) 11400, a power circuit 11500, and a communication circuit 11600.
  • the computing device 11000 may correspond to the service server 2000 or the HMD module 1000 .
  • the memory 11200 may include, for example, high-speed random access memory, a magnetic disk, SRAM, DRAM, ROM, flash memory, or non-volatile memory.
  • the memory 11200 may include a software module required for the operation of the computing device 11000, an instruction set, or other various data, including data of the learned model.
  • access to the memory 11200 from other components such as the processor 11100 or the peripheral device interface 11300 may be controlled by the processor 11100 .
  • Peripheral interface 11300 may couple input and/or output peripherals of computing device 11000 to processor 11100 and memory 11200 .
  • the processor 11100 may execute a software module or an instruction set stored in the memory 11200 to perform various functions for the computing device 11000 and process data.
  • the input/output subsystem 11400 may couple various input/output peripherals to the peripheral interface 11300 .
  • the input/output subsystem 11400 may include a controller for coupling a peripheral device such as a monitor, keyboard, mouse, printer, or a touch screen or sensor as required to the peripheral interface 11300 .
  • input/output peripherals may be coupled to peripheral interface 11300 without going through input/output subsystem 11400 .
  • the power circuit 11500 may supply power to all or some of the components of the terminal.
  • the power circuit 11500 may include a power management system, one or more power sources such as a battery or alternating current (AC), a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other components for the creation, management, and distribution of power.
  • the communication circuit 11600 may enable communication with another computing device using at least one external port.
  • the communication circuit 11600 may include an RF circuit to transmit and receive an RF signal, also known as an electromagnetic signal, to enable communication with other computing devices.
  • FIG. 13 is only an example of the computing device 11000; the computing device 11000 may omit some of the components shown in FIG. 13, may further include additional components not shown in FIG. 13, or may have a configuration or arrangement that combines two or more components.
  • for example, a computing device for a communication terminal in a mobile environment may further include a touch screen or a sensor in addition to the components shown in FIG. 13, and the communication circuit 11600 may include a circuit for RF communication using various communication methods (WiFi, 3G, LTE, Bluetooth, NFC, Zigbee, etc.).
  • Components that may be included in the computing device 11000 may be implemented in hardware, software, or a combination of both, including an integrated circuit specialized for one or more signal-processing tasks or applications.
  • Methods according to an embodiment of the present invention may be implemented in the form of program instructions that can be executed through various computing devices and recorded in a computer-readable medium.
  • the program according to the present embodiment may be configured as a PC-based program or an application dedicated to a mobile terminal.
  • the application to which the present invention is applied may be installed in the user terminal through a file provided by the file distribution system.
  • the file distribution system may include a file transmission unit (not shown) that transmits the file according to a request of the user terminal.
  • the device described above may be implemented as a hardware component, a software component, and/or a combination of the hardware component and the software component.
  • the devices and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as, for example, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions.
  • the processing device may execute an operating system (OS) and one or more software applications running on the operating system.
  • the processing device may also access, store, manipulate, process, and generate data in response to execution of the software.
  • the processing device may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may include a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
  • Software may comprise a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or may command the processing device independently or collectively.
  • the software and/or data may be permanently or temporarily embodied in any kind of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to it.
  • the software may be distributed over networked computing devices, and stored or executed in a distributed manner. Software and data may be stored in one or more computer-readable recording media.
  • the method according to the embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium.
  • the computer-readable medium may include program instructions, data files, data structures, etc. alone or in combination.
  • the program instructions recorded on the medium may be specially designed and configured for the embodiment, or may be known and available to those skilled in the art of computer software.
  • Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine language codes such as those generated by a compiler, but also high-level language codes that can be executed by a computer using an interpreter or the like.
  • the hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Surgery (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Pain & Pain Management (AREA)
  • Vascular Medicine (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Rehabilitation Therapy (AREA)
  • Rehabilitation Tools (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to a virtual-reality system and method for rehabilitating exotropia patients on the basis of artificial intelligence, and a computer-readable medium, and, more specifically, to such a system, method, and medium that provide virtual-reality content enabling exotropia patients to train eye convergence, so as to alleviate the boredom of repetitive training for the exotropia patients, and that collect training-progress data so as to provide strabismus diagnosis information about exotropia patients who have trained with the virtual-reality content.

Description

Artificial intelligence-based virtual reality system, method, and computer-readable medium for rehabilitation of exotropia patients
The present invention relates to a virtual reality system, method, and computer-readable medium for rehabilitation training of exotropia patients based on artificial intelligence, and more particularly, to an artificial-intelligence-based virtual reality system, method, and computer-readable medium for rehabilitation training of exotropia patients that provide exotropia patients with virtual reality content for eye convergence training, thereby relieving the boredom of repetitive training, and that collect training progress data to provide strabismus diagnosis information for the exotropia patients who performed the virtual reality content.
Virtual reality is a technology that enables interaction between a user and a three-dimensional virtual space created by a computer system; it is a convergence technology in which the user feels immersion through the five senses (sight, hearing, smell, taste, touch) and experiences a sense of reality as if actually present in that space. In the global virtual reality market, head mounted displays occupy most of the market, and various types of virtual reality devices have recently come into use. As related products spread around hardware (devices) in the current virtual reality market, the influence of virtual reality content such as applications, and of the platforms that distribute such content, is expected to expand.
Meanwhile, since the head mounted display is hardware based on the user's vision, it is expected that vision therapy can be utilized in various healthcare fields. Vision therapy, also called vision training, is a clinical approach that corrects eye movement disorders, abnormalities of binocular function such as amblyopia, and accommodation disorders such as strabismus, and improves related symptoms. It includes various non-surgical methods of improving visual function through visual training. Patients with amblyopia and strabismus have conventionally undergone rehabilitation such as traditional eye convergence training for treatment, but because the repeated training is boring, sustained training, which depends on the patient's will, has been difficult.
In order to solve this problem, research has been conducted with the aim of improving visual function by providing virtual reality content for vision training that can be experienced anytime and anywhere through a virtual reality experience requiring only a simple device, without constraints of space and time, and that users can find interesting.
An object of the present invention is to provide an artificial-intelligence-based virtual reality system, method, and computer-readable medium for rehabilitation training of exotropia patients that provide exotropia patients with virtual reality content for eye convergence training, thereby relieving the boredom of repetitive training, and that collect training progress data to provide strabismus diagnosis information for the exotropia patients who performed the virtual reality content.
In order to solve the above problems, the present invention provides a rehabilitation training system for exotropia patients including an HMD module and a service server, wherein virtual reality content can be executed in the HMD module, the virtual reality content comprising: a trigger step of moving a first object in a preset area of the central portion of a virtual reality screen from a start position toward the user according to the user's controller manipulation; a targeting step in which the virtual reality screen moves according to the direction manipulation of the HMD module following the movement of the user's head; a release step of firing the first object on the virtual reality screen according to the user's controller manipulation; and a score calculation step of calculating a score by determining whether the first object fired in the release step contacts one or more second objects present on the virtual reality screen.
In an embodiment of the present invention, each of the one or more second objects is set to have a preset size, and the coordinates of each second object may be at a respective distance from the user's coordinates.
In an embodiment of the present invention, the HMD module collects coordinate information of the user's gaze on the virtual reality screen and pupil position information; the trigger step derives a release level of the first object based on the coordinate information of the user's gaze and the user's pupil position information while the first object moves; and the release step determines the firing intensity or firing distance of the first object on the virtual reality screen based on the release level derived in the trigger step.
In an embodiment of the present invention, the HMD module collects the user's pupil position information on the virtual reality screen, and the trigger step resets the position of the first object, which has moved toward the user, to the start position when the user's pupil position deviates from a preset range.
In an embodiment of the present invention, the HMD module collects coordinate information of the user's gaze on the virtual reality screen; the release step derives a release area according to a preset criterion based on the coordinate information of the gaze of the user's left eye and right eye, and the first object may be fired within the range of the derived release area.
In an embodiment of the present invention, the virtual reality content further includes a strabismus diagnosis step of generating a gaze heat map based on the coordinate information of the user's gaze in the trigger step, the targeting step, and the release step, and transmitting the generated gaze heat map to the service server; and the service server may derive strabismus diagnosis information for the received gaze heat map by a diagnosis model learned from learning gaze heat map data.
To solve the above problems, the present invention provides a rehabilitation training method for exotropia patients using a rehabilitation training system for exotropia patients that includes an HMD module and a service server, the method comprising: a trigger step of moving, by the HMD module, a first object located in a preset area of the central portion of the virtual reality screen from a start position toward the user according to the user's controller manipulation; a targeting step of moving, by the HMD module, the virtual reality screen according to the directional manipulation of the HMD module resulting from the movement of the user's head; a release step of firing, by the HMD module, the first object on the virtual reality screen according to the user's controller manipulation; and a score calculation step of calculating, by the HMD module, a score by determining whether the first object fired in the release step has come into contact with one or more second objects present on the virtual reality screen.
To solve the above problems, one embodiment of the present invention provides a computer-readable medium for implementing a rehabilitation training method for exotropia patients using a rehabilitation training system for exotropia patients that includes an HMD module and a service server, the computer-readable medium storing instructions that cause the components of the rehabilitation training system to perform the following steps: a trigger step of moving, by the HMD module, a first object located in a preset area of the central portion of the virtual reality screen from a start position toward the user according to the user's controller manipulation; a targeting step of moving, by the HMD module, the virtual reality screen according to the directional manipulation of the HMD module resulting from the movement of the user's head; a release step of firing, by the HMD module, the first object on the virtual reality screen according to the user's controller manipulation; and a score calculation step of calculating, by the HMD module, a score by determining whether the first object fired in the release step has come into contact with one or more second objects present on the virtual reality screen.
According to one embodiment of the present invention, by providing virtual reality content for eye convergence training, boredom caused by the repetitive training of exotropia patients can be reduced and the patients can become more immersed in rehabilitation training.
According to one embodiment of the present invention, the difficulty of the game is adjusted by varying the release level of the first object implemented in the game according to the degree of the user's eye convergence, so that user-customized virtual reality content can be provided.
According to one embodiment of the present invention, the user can check his or her degree of eye convergence on a virtual reality screen that displays information related to the coordinate information of the user's gaze, and can thus perform eye convergence training while playing the game.
According to one embodiment of the present invention, by analyzing the content performance information of a user who has executed the virtual reality content on the basis of artificial intelligence, strabismus diagnosis information on the degree of strabismus can be derived.
According to one embodiment of the present invention, the strabismus diagnosis information of a user who has executed the virtual reality content can be provided to the user, the user's guardian, or a specialist, and can be utilized as data for the user's examination, treatment, and consultation.
FIG. 1 schematically shows the overall configuration of a rehabilitation training system according to an embodiment of the present invention.
FIG. 2 schematically shows a user of the rehabilitation training system wearing an HMD module according to an embodiment of the present invention.
FIG. 3 schematically shows the steps of performing virtual reality content according to an embodiment of the present invention.
FIG. 4 schematically shows display screens of the HMD module provided in the trigger step, the targeting step, the release step, and the score calculation step according to an embodiment of the present invention.
FIG. 5 schematically shows a display screen of the HMD module provided in the trigger step according to an embodiment of the present invention.
FIG. 6 schematically shows a display screen of the HMD module provided in the release step according to an embodiment of the present invention.
FIG. 7 schematically shows display screens of the HMD module provided in the targeting step, the release step, and the score calculation step according to an embodiment of the present invention.
FIG. 8 schematically shows a display screen of the HMD module provided in the release step according to an embodiment of the present invention.
FIG. 9 schematically shows the internal configuration of a service server according to an embodiment of the present invention.
FIG. 10 schematically shows a gaze heat map generated based on coordinate information of a user's gaze according to an embodiment of the present invention.
FIG. 11 schematically shows the steps performed by the HMD module and the service server according to an embodiment of the present invention.
FIG. 12 schematically shows the operation of the diagnosis model learning unit of the service server according to an embodiment of the present invention.
FIG. 13 exemplarily shows a computing device according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those of ordinary skill in the art to which the present invention pertains can easily practice them. However, the present invention may be embodied in many different forms and is not limited to the embodiments described herein. In the drawings, parts irrelevant to the description are omitted in order to clearly explain the present invention, and like reference numerals denote like parts throughout the specification.
Throughout the specification, when a part is said to be "connected" to another part, this includes not only the case of being "directly connected" but also the case of being "electrically connected" with another element interposed therebetween. In addition, when a part is said to "include" a certain component, this means that other components may be further included, rather than excluded, unless otherwise stated.
Terms including ordinal numbers, such as "first" and "second," may be used to describe various components, but the components are not limited by these terms. The terms are used only for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, a second component may also be referred to as a first component. The term "and/or" includes a combination of a plurality of related listed items or any one of the plurality of related listed items.
In this specification, a "unit" includes a unit realized by hardware, a unit realized by software, and a unit realized using both. In addition, one unit may be realized using two or more pieces of hardware, and two or more units may be realized by one piece of hardware. Meanwhile, a "unit" is not limited to software or hardware; it may be configured to reside on an addressable storage medium or to execute on one or more processors. Thus, as an example, a "unit" includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables. The functions provided within the components and "units" may be combined into a smaller number of components and "units" or further separated into additional components and "units." Furthermore, the components and "units" may be implemented to execute on one or more CPUs in a device or a secure multimedia card.
The "user terminal" referred to below may be implemented as a computer or a portable terminal capable of accessing a server or another terminal through a network. Here, the computer includes, for example, a notebook, desktop, or laptop computer equipped with a web browser, and the portable terminal is, for example, a wireless communication device that ensures portability and mobility, and may include all kinds of handheld wireless communication devices such as PCS (Personal Communication System), GSM (Global System for Mobile communications), PDC (Personal Digital Cellular), PHS (Personal Handyphone System), PDA (Personal Digital Assistant), IMT (International Mobile Telecommunication)-2000, CDMA (Code Division Multiple Access)-2000, W-CDMA (Wideband Code Division Multiple Access), and WiBro (Wireless Broadband Internet) terminals. In addition, the "network" may be implemented as a wired network such as a local area network (LAN), a wide area network (WAN), or a value added network (VAN), or as any kind of wireless network such as a mobile radio communication network or a satellite communication network.
FIG. 1 schematically shows the overall configuration of a rehabilitation training system according to an embodiment of the present invention.
The rehabilitation training system according to an embodiment of the present invention includes an HMD module 1000 and a service server 2000. The service server 2000 and the HMD module 1000 correspond to computing devices each including one or more processors and one or more memories. A user can play the eye convergence training game provided through the rehabilitation training system of the present invention, and such play in a virtual space can maximize the training continuity of exotropia patients by alleviating symptoms and arousing interest through game elements. The service server 2000 can analyze the patient's training data collected through the HMD module 1000 by means of a trained diagnosis model and transmit the resulting strabismus diagnosis information to the outside. The HMD module 1000 and the service server 2000 can communicate through a network. In addition, the user's inputs and motions can be received through a controller, and the received inputs and motions are transmitted to the HMD module 1000 and reflected in the virtual reality content.
The HMD module 1000 includes a display unit 1100, a speaker unit 1200, an eye tracker unit 1300, a content execution unit 1400, and a strabismus diagnosis unit 1500.
Specifically, the display unit 1100 displays the display screen provided to the user wearing the HMD module 1000. The rehabilitation training virtual reality system of the present invention provides virtual reality content in which the user fires a first object O1 at one of one or more second objects O2 according to the user's controller manipulation, through a trigger step S1000, a targeting step S1100, a release step S1200, and a score calculation step S1300, and the display unit 1100 displays the virtual reality screen shown while such virtual reality content is provided.
The speaker unit 1200 can output the sound information of the virtual reality content to provide the sound information to the user.
The eye tracker unit 1300 can detect the eye movement of the user wearing the HMD module 1000 and collects coordinate information of the user's gaze and pupil position information from the detected eye movement. The present invention is a rehabilitation training system for exotropia patients that implements content in which a first object O1 is fired at one of one or more second objects O2, and the eye tracker unit 1300 detects the eye movement of the user experiencing the virtual reality content and collects the coordinate information of the user's gaze and the pupil position information while the virtual reality content is running. The collected information related to the gaze coordinate information and the pupil position information can then be utilized to derive the user's strabismus diagnosis information.
The content execution unit 1400 can execute the virtual reality content through the HMD module 1000, receive the user's inputs and motions through the controller on the provided virtual reality screen, and run the game for eye convergence training. In one embodiment of the present invention, the content execution unit 1400 executes virtual reality content including a trigger step S1000, a targeting step S1100, a release step S1200, a score calculation step S1300, and a strabismus diagnosis step S1400.
The strabismus diagnosis unit 1500 generates a gaze heat map based on the coordinate information of the user's gaze in the trigger step S1000, the targeting step S1100, and the release step S1200 of the virtual reality content. The generated gaze heat map is transmitted to the service server 2000.
Meanwhile, the service server 2000 receives the gaze heat map generated by the HMD module 1000 and derives strabismus diagnosis information for the received gaze heat map by means of the trained diagnosis model. Preferably, the service server 2000 includes a strabismus diagnosis information derivation unit 2100 and a diagnosis model learning unit 2200.
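The disclosure does not commit to a particular architecture for the diagnosis model trained by the diagnosis model learning unit 2200. Purely as a minimal, illustrative stand-in (a nearest-centroid classifier, not the actual trained model, with all names and the distance measure assumed), the sketch below learns one mean heat map per diagnosis label from labeled training gaze heat map data and labels a new heat map by its nearest mean:

```python
class HeatmapDiagnosisModel:
    """Toy stand-in for the trained diagnosis model: stores the mean
    training heat map per label and diagnoses a new heat map by the
    nearest mean under squared-error distance.  Heat maps are flat
    lists of dwell times over a fixed screen grid."""

    def __init__(self):
        self.centroids = {}

    def fit(self, heatmaps, labels):
        # Group the training gaze heat maps by diagnosis label and
        # average them cell by cell.
        by_label = {}
        for heatmap, label in zip(heatmaps, labels):
            by_label.setdefault(label, []).append(heatmap)
        for label, maps in by_label.items():
            n = len(maps)
            self.centroids[label] = [sum(cells) / n for cells in zip(*maps)]

    def diagnose(self, heatmap):
        # Return the label whose mean heat map is closest to the input.
        def sq_error(centroid):
            return sum((a - b) ** 2 for a, b in zip(heatmap, centroid))
        return min(self.centroids, key=lambda lb: sq_error(self.centroids[lb]))
```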
The rehabilitation training system including the HMD module 1000 and the service server 2000 shown in FIG. 1 may further include elements other than the illustrated components, but for convenience, only the components related to the rehabilitation training system according to embodiments of the present invention are shown.
FIG. 2 schematically shows a user of the rehabilitation training system wearing the HMD module 1000 according to an embodiment of the present invention.
FIG. 2 shows an embodiment of the HMD module 1000 worn by the user. In an embodiment of the present invention, the virtual reality device that provides the user with virtual reality content for rehabilitation training may include the HMD module 1000 and a controller. As shown in FIG. 2, the user wears the HMD module 1000 and holds the controller, so that the user's motions and inputs can be received by the HMD module 1000 through the controller, and image information and sound information of the virtual reality can be provided to the user through the HMD module 1000.
FIG. 3 schematically shows the steps performed by the content execution unit 1400 according to an embodiment of the present invention, and FIG. 4 schematically shows display screens of the HMD module 1000 provided in the trigger step S1000, the targeting step S1100, the release step S1200, and the score calculation step S1300 according to an embodiment of the present invention.
The present invention is a rehabilitation training system for exotropia patients that includes an HMD module 1000 and a service server 2000. In the HMD module 1000, the virtual reality content can be executed by the content execution unit 1400, and the virtual reality content includes: a trigger step S1000 of moving a first object O1 in a preset area of the central portion of the virtual reality screen from a start position toward the user according to the user's controller manipulation; a targeting step S1100 of moving the virtual reality screen according to the directional manipulation of the HMD module 1000 resulting from the movement of the user's head; a release step S1200 of firing the first object O1 on the virtual reality screen according to the user's controller manipulation; a score calculation step S1300 of calculating a score by determining whether the first object O1 fired in the release step S1200 has come into contact with one or more second objects O2 present on the virtual reality screen; and a strabismus diagnosis step S1400 of generating a gaze heat map based on the coordinate information of the user's gaze in the trigger step S1000, the targeting step S1100, and the release step S1200, and transmitting the generated gaze heat map to the service server 2000.
Specifically, in the trigger step S1000, the first object O1 in a preset area of the central portion of the virtual reality screen is moved from the start position toward the user according to the user's controller manipulation. As shown in FIG. 4(a), the first object O1 and one or more second objects O2 are displayed. Preferably, each of the one or more second objects O2 is set to have a preset size, and the coordinates of each second object O2 are at a respective distance from the user's coordinates. As shown in FIG. 4(a), the second objects O2 are displayed at respective distances from the user's coordinates, each in a form having a preset size and shape. The user provided with such a virtual reality screen operates the controller, and according to the user's controller manipulation, the first object O1 in the preset area of the central portion of the virtual reality screen is moved from the preset start position toward the user. As shown in FIGS. 4(b) and 4(c), the coordinates of the first object O1 move so that the distance to the user's coordinates decreases, and the object gradually approaches the user. In this way, in the trigger step S1000, the first object O1 in the preset area of the central portion of the virtual reality screen is moved from the start position toward the user according to the user's controller manipulation. By concentrating on the moving first object O1, the user can naturally perform eye convergence training.
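The motion of the first object in the trigger step can be sketched as a simple per-input depth update; the coordinate convention (depth decreasing toward the user at z = 0) and the step size are illustrative assumptions rather than values from the disclosure:

```python
def step_toward_user(object_z: float, user_z: float = 0.0,
                     step: float = 0.5) -> float:
    """Advance the first object one increment from its current depth
    toward the user's position without overshooting it.  Intended to be
    called once per controller trigger input during the trigger step;
    the step size of 0.5 units is an illustrative assumption."""
    return max(user_z, object_z - step)
```

Because the object never overshoots the user's position, repeated controller inputs bring it steadily closer, which is what drives the eye convergence exercise.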
In the targeting step S1100, the virtual reality screen moves according to the directional manipulation of the HMD module 1000 resulting from the movement of the user's head. After the first object O1 has been fully moved toward the user in the trigger step S1000, the user looks at the second object O2 that is the target at which the first object O1 will be fired, and the virtual reality screen moves as shown in FIG. 4(d) according to the directional manipulation of the HMD module 1000 resulting from the movement of the user's head.
In the release step S1200, the first object O1 is fired on the virtual reality screen according to the user's controller manipulation. When the user performs the controller manipulation for firing the first object O1 on the virtual reality screen moved in the targeting step S1100, the first object O1 is fired on the virtual reality screen and the fired first object O1 comes into contact with the targeted second object O2, as shown in FIG. 4(e).
In the score calculation step S1300, a score is calculated by determining whether the first object O1 fired in the release step S1200 has come into contact with one or more second objects O2 present on the virtual reality screen. As shown in FIGS. 4(d) and 4(e), the first object O1 released by the user is fired at and contacts the targeted second object O2, and in the score calculation step S1300 the score is calculated by determining whether the fired first object O1 has contacted the second object O2.
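Contact determination in the score calculation step can be sketched as a distance test between the fired first object and each second object; modeling the second objects as spheres and the specific point values are illustrative assumptions, not details from the disclosure:

```python
import math


def hits(projectile_pos, target_pos, target_radius):
    """A fired first object contacts a spherical second object when the
    distance between their centers is within the target's radius."""
    return math.dist(projectile_pos, target_pos) <= target_radius


def score_shot(projectile_pos, targets):
    """Return the point value of the first target the projectile
    contacts, or 0 on a miss.  Targets are (position, radius, points)
    triples; the point values are illustrative assumptions."""
    for position, radius, points in targets:
        if hits(projectile_pos, position, radius):
            return points
    return 0
```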
Meanwhile, in the strabismus diagnosis step S1400, a gaze heat map is generated based on the coordinate information of the user's gaze in the trigger step S1000, the targeting step S1100, and the release step S1200, and the generated gaze heat map is transmitted to the service server 2000. The eye tracker unit 1300 of the HMD module 1000 collects the coordinate information of the user's gaze in real time during the trigger step S1000, the targeting step S1100, and the release step S1200, and the strabismus diagnosis unit 1500 of the HMD module 1000 generates the gaze heat map based on the coordinate information of the user's gaze collected by the eye tracker unit 1300. The gaze heat map is a visual representation of how long the user's gaze has dwelt on each region of the virtual reality screen, based on the gaze information that changes as the user performs the virtual reality content, following the movement of the first object O1 or the movement of the user's head. In the strabismus diagnosis step S1400, such a gaze heat map can be generated and transmitted to the service server 2000.
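One straightforward way to realize such a gaze heat map, sketched below under stated assumptions (the grid resolution and the 60 Hz sampling rate are illustrative, not values from the disclosure), is to accumulate the per-frame gaze coordinates into a grid of dwell times:

```python
def build_gaze_heatmap(samples, width=8, height=8, frame_dt=1 / 60):
    """Accumulate per-frame gaze samples (normalized (x, y) coordinates
    in [0, 1)) into a height x width grid of dwell times in seconds.
    The grid resolution and the 60 Hz sampling interval are
    illustrative assumptions."""
    grid = [[0.0] * width for _ in range(height)]
    for x, y in samples:
        col = min(int(x * width), width - 1)  # clamp x == 1.0 to last cell
        row = min(int(y * height), height - 1)
        grid[row][col] += frame_dt
    return grid
```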
In this way, rehabilitation training for exotropia patients can be performed by providing a virtual reality screen, an environment in which the user converges his or her eyes on the moving first object O1, and the user's immersion in the content can be improved by providing content in which a score is derived from the result of firing the object.
FIG. 5 schematically shows display screens of the HMD module 1000 provided in the trigger step S1000 according to an embodiment of the present invention.
Specifically, FIG. 5 shows display screens of the HMD module 1000 provided in the above-described trigger step S1000. The display screens shown in FIG. 5 are displayed by the display unit 1100 of the HMD module 1000. As shown in FIG. 5(a), the first object O1 is displayed at its start position in a preset area of the central portion of the virtual reality screen displayed by the HMD module 1000. In addition, one or more second objects O2, each set to have a preset size, are displayed, with the coordinates of each at a respective distance from the user's coordinates. Then, in the trigger step S1000, the position of the first object O1 is moved from the start position toward the user's position, as shown in FIGS. 5(b) and 5(c). The one or more second objects O2 are not moved; only the first object O1 is moved toward the user according to the user's controller manipulation.
Meanwhile, the HMD module 1000 collects the user's pupil position information with respect to the virtual reality screen, and in the trigger step (S1000), when the user's pupil position deviates from a preset range, the position of the first object (O1) that has moved toward the user is reset to the start position. The eye tracker unit 1300 of the HMD module 1000 collects the user's pupil position information in the trigger step (S1000), and in the trigger step (S1000) performed by the content execution unit 1400 of the HMD module 1000, it is determined, based on the collected pupil position information, whether the user's pupil position meets a preset criterion. For example, when the user's pupil position does not meet the preset criterion, such as when the strabismus angle according to the user's pupil position is judged to deviate from a preset range because the eyes do not converge well, when strabismus manifests so that the strabismus angle is greater than or equal to a preset threshold, or when the time for which the strabismus angle deviates from the preset criterion is greater than or equal to a preset time, the position of the first object (O1) that has moved toward the user is reset to the start position. If the pupil position of a user provided with a screen such as that of FIG. 5(c) does not meet the preset criterion, the position of the first object (O1) is reset to the start position, and the virtual reality screen provided to the user may again be the screen of FIG. 5(a) rather than the screen of FIG. 5(d).
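The reset behavior described above may be sketched, for illustration only, as follows; the function names, the threshold values, and the representation of pupil positions as horizontal gaze angles are assumptions of this sketch and are not taken from the specification.

```python
# Sketch of the trigger-step reset check (S1000): if the strabismus angle
# derived from the tracked pupil positions stays outside a preset range for
# longer than a preset time, the first object O1 snaps back to its start
# position. All thresholds and coordinates here are illustrative assumptions.

START_POSITION = (0.0, 0.0, 10.0)   # assumed start coordinates of O1
MAX_ANGLE_DEG = 10.0                # assumed allowed strabismus angle
MAX_OUT_OF_RANGE_SEC = 0.5          # assumed allowed time out of range

def strabismus_angle(left_deg, right_deg):
    """Horizontal misalignment between the two eyes, in degrees."""
    return abs(left_deg - right_deg)

def update_object_position(o1_pos, left_deg, right_deg, out_of_range_sec, dt):
    """Return the new O1 position and the accumulated out-of-range time."""
    if strabismus_angle(left_deg, right_deg) > MAX_ANGLE_DEG:
        out_of_range_sec += dt
        if out_of_range_sec >= MAX_OUT_OF_RANGE_SEC:
            return START_POSITION, 0.0   # reset O1 to its start position
    else:
        out_of_range_sec = 0.0           # eyes converged again
    return o1_pos, out_of_range_sec
```

A well-converged sample leaves O1 where it is, while a deviation sustained beyond the preset time resets it to the start position.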
In addition, in the trigger step (S1000), the release level of the first object (O1) is derived based on the coordinate information of the user's gaze and the user's pupil position information while the first object (O1) is moving. The eye tracker unit 1300 of the HMD module 1000 collects the gaze coordinate information and the pupil position information in the trigger step (S1000), and the content execution unit 1400 of the HMD module 1000 may derive the release level of the first object (O1) according to a preset criterion based on the coordinate information of the user's gaze and the pupil position information in the trigger step (S1000). For example, the release level, which serves as the basis for launching the first object (O1) in the release step (S1200), may be derived according to preset criteria such as whether the strabismus angle according to the user's pupil position is within a preset range and whether the movement of the coordinate information of the user's gaze matches the movement of the first object (O1).
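A minimal sketch of how such a release level might be derived from per-frame gaze samples follows; the sample format, the angle and tracking tolerances, and the mapping onto a 1-to-5 scale are assumptions of this sketch rather than details given by the specification.

```python
# Sketch: derive a release level (1..5) from per-frame samples collected
# while O1 travels toward the user. A sample "passes" when the strabismus
# angle is inside the preset range and the gaze point tracks O1's position.
# The five-level scale mirrors Table 1; the tolerances are assumed values.

ANGLE_LIMIT_DEG = 10.0   # assumed preset strabismus-angle range
TRACK_TOLERANCE = 0.05   # assumed gaze-to-O1 distance tolerance (screen units)

def derive_release_level(samples):
    """samples: list of (strabismus_angle_deg, gaze_to_o1_distance)."""
    if not samples:
        return 1
    passed = sum(1 for angle, dist in samples
                 if angle <= ANGLE_LIMIT_DEG and dist <= TRACK_TOLERANCE)
    ratio = passed / len(samples)
    # Map the fraction of well-tracked frames onto the five levels.
    return 1 + min(4, int(ratio * 5))
```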
Virtual reality content executed in this way moves the first object (O1) displayed on the virtual reality screen from a preset start position toward the user, allowing the user to concentrate while watching the first object (O1), so that a patient with an exotropia disorder can perform eye-convergence training with greater interest; and while the user experiences the virtual reality content, the coordinate information of the user's gaze and the pupil position information are collected and reflected in the virtual reality content, thereby improving the user's immersion.
FIG. 6 schematically shows a display screen of the HMD module provided in the release step according to an embodiment of the present invention.
The main purpose of the rehabilitation training system of the present invention is to assist eye-convergence training for a user with an exotropia disorder, such as exotropia or intermittent exotropia, in which the strabismus angle according to eye movement turns outward. In this case, the gaze coordinate information collected by the eye tracker of the HMD module 1000 may appear differently depending on the strabismus angle of each exotropia patient.
FIG. 6 shows display screens of the HMD module 1000 in which the release areas derived in the targeting step (S1100) differ according to the different collected pupil position information. FIG. 6(a) shows a display screen provided by the HMD module 1000 for a user whose left and right eyes show no deviation of the strabismus angle. In this way, in the targeting step (S1100), the content execution unit 1400 of the HMD module 1000 may derive a release area according to a preset criterion based on the coordinate information of the gaze of the left eye (the Left point in FIG. 6(a)) and the coordinate information of the gaze of the right eye (the Right point in FIG. 6(a)).
FIG. 6(b) shows that the gaze coordinate information on the virtual reality screen is displayed differently according to the deviated strabismus angle of the right eye. In FIG. 6(b), the right eye of the user looking at the virtual reality screen has a strabismus angle turned outward, and therefore the coordinate information of the right eye's gaze on the virtual reality screen (the Right point) is displayed at a greater distance from the first object (O1) than the coordinate information of the left eye's gaze (the Left point). FIG. 6(c), conversely to FIG. 6(b), shows that the gaze coordinate information on the virtual reality screen is displayed differently according to the deviated strabismus angle of the left eye. In this way, the gaze coordinate information of the left eye and that of the right eye may each be displayed differently on the virtual reality screen, and the release area is derived according to a preset criterion based on the coordinate information of the gaze of the left eye and the coordinate information of the gaze of the right eye. According to the gaze coordinate information, the release areas derived in FIGS. 6(b) and 6(c) are shown to be wider than the release area in FIG. 6(a).
Thereafter, the first object (O1) is launched within the range of the derived release area according to the user's controller operation. The first object (O1) may be launched from a random point within the range of the release area. Accordingly, the more the user concentrates the gaze on the first object (O1) and aligns both eyes symmetrically so that the gaze coordinate information does not deviate from the area of the first object (O1), the more the release area is set within a range that does not deviate from the area of the first object (O1). Thereafter, in the release step (S1200), the first object (O1) is launched from a random point within the range of the release area, and the smaller the deviation between the release area and the area of the first object (O1), the more accurately the first object (O1) can be launched toward the second object (O2).
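The release-area mechanics above can be sketched as follows; modelling the release area as a circle whose radius grows with the spread between the two gaze points, and the base radius value, are assumptions made for this illustration.

```python
import math
import random

# Sketch of the targeting/release mechanics: the release area is a circle
# whose radius grows with the spread between the left- and right-eye gaze
# points on the virtual screen, so a patient whose deviated eye drifts away
# from O1 gets a wider (less accurate) launch area. The circular geometry
# and the base radius are illustrative assumptions.

BASE_RADIUS = 0.02   # assumed minimum release-area radius (screen units)

def release_area(left_gaze, right_gaze):
    """Release circle (center, radius) from the left/right gaze points."""
    cx = (left_gaze[0] + right_gaze[0]) / 2
    cy = (left_gaze[1] + right_gaze[1]) / 2
    spread = math.dist(left_gaze, right_gaze)
    return (cx, cy), BASE_RADIUS + spread / 2

def random_launch_point(center, radius, rng=random):
    """O1 is launched from a uniform random point inside the circle."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = radius * math.sqrt(rng.random())
    return (center[0] + r * math.cos(theta),
            center[1] + r * math.sin(theta))
```

With well-converged eyes the two gaze points coincide, the circle shrinks to the base radius, and the random launch point stays close to O1's area.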
Preferably, the release area may be displayed for a preset time according to a preset criterion based on the average value of the user's pupil position while the first object (O1) moves in the trigger step (S1000). In an embodiment of the present invention, when the user's pupil positions remain symmetrically aligned without deviation of the strabismus angle while the first object moves in the trigger step (S1000), the release area is displayed for a longer time; and when the user's pupil position turns to a strabismus angle and changes while the first object (O1) moves, the release area may be displayed for a shorter time according to the preset criterion. In addition, the release area may be displayed for a preset time according to a preset criterion based on the average value of the coordinate information of the user's gaze while the first object (O1) moves in the trigger step (S1000).
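One way such a display time could be computed from the pupil samples is sketched below; the time bounds and the linear mapping from the average strabismus angle are assumed values, since the specification only requires that stable convergence yield a longer display time.

```python
# Sketch: choose how long the release area stays visible from the pupil
# samples collected while O1 moved. Stable, symmetric pupil positions earn
# a longer display window; a drifting strabismus angle shortens it. The
# time bounds and the linear mapping are assumed values.

MIN_DISPLAY_SEC = 0.5
MAX_DISPLAY_SEC = 3.0
ANGLE_LIMIT_DEG = 10.0   # assumed preset strabismus-angle range

def release_area_display_time(angles_deg):
    """angles_deg: strabismus angles sampled while O1 was moving."""
    if not angles_deg:
        return MIN_DISPLAY_SEC
    mean_angle = sum(angles_deg) / len(angles_deg)
    stability = max(0.0, 1.0 - mean_angle / ANGLE_LIMIT_DEG)
    return MIN_DISPLAY_SEC + stability * (MAX_DISPLAY_SEC - MIN_DISPLAY_SEC)
```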
FIG. 7 schematically shows display screens of the HMD module 1000 provided in the targeting step (S1100), the release step (S1200), and the score calculation step (S1300) according to an embodiment of the present invention.
Meanwhile, when the first object (O1) moves to a preset final position on the user's side, a command for an action to be performed by the user may be displayed on the virtual reality screen, as shown in FIG. 5(d). Alternatively, a voice describing the action to be performed may be output through the speaker unit 1200 of the HMD module 1000. The user who has received the command on the virtual reality screen looks around to carry it out; in the targeting step (S1100), direction manipulation of the HMD module 1000 is performed according to the movement of the user's head, and the virtual reality screen moves according to the direction manipulation of the HMD module 1000, as shown in FIG. 7(a). After the virtual reality screen has moved, the release step (S1200) of launching the first object (O1) on the virtual reality screen according to the user's controller operation is performed in the virtual reality content. The user operates the controller (for example, pushes a button) to launch the first object (O1), and according to this user input, the first object (O1) is launched in the direction in which the second object (O2) is displayed, as shown in FIG. 7(b), so that the first object (O1) and the second object (O2) come into contact.
Meanwhile, in the release step (S1200), the launch intensity or launch distance of the first object (O1) on the virtual reality screen is determined based on the release level derived in the trigger step (S1000). In the above-described trigger step (S1000), the release level of the first object (O1) is derived based on the coordinate information of the user's gaze and the user's pupil position information while the first object (O1) is moving. In the trigger step (S1000), the release level is derived according to a preset criterion, and in the release step (S1200), the launch intensity or launch distance of the first object (O1) on the virtual reality screen is determined based on the release level derived in the trigger step (S1000). In an embodiment of the present invention, the launch intensity or launch distance may be derived based on a release level derived according to a preset criterion, as shown in Table 1 below.
[Table 1]

Release level | Launch distance | Launch intensity
1 | 10 m | Weak
2 | 15 m | Weak-medium
3 | 20 m | Medium
4 | 25 m | Medium-strong
5 | 30 m | Strong
In such an embodiment, the release level may be derived according to preset criteria for deriving the release level, such as whether the strabismus angle according to the user's pupil position is within a preset range and whether the movement of the coordinate information of the user's gaze matches the movement of the first object (O1). The more it is determined that the pupil position of the user using the virtual reality content is maintained without exotropia manifesting, as when the strabismus angle according to the user's pupil position is within the preset range, and that the user is concentrating on the virtual reality content so that the gaze following the movement of the first object (O1) stays on the first object (O1), as when the movement of the user's gaze coordinate information matches the movement of the first object (O1), the larger the release level value of Table 1 that is derived. In this way, the better the gaze is maintained on the moving first object (O1), the greater the launch intensity and launch distance that are determined, from which a better score can be derived.
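The Table 1 lookup used in the release step can be sketched directly; the distances follow Table 1, while the string labels standing in for the Korean intensity grades are this sketch's rendering.

```python
# Sketch of the Table 1 lookup used in the release step (S1200): the
# release level derived in the trigger step selects the launch distance
# and launch intensity of O1.

RELEASE_TABLE = {
    1: (10.0, "weak"),
    2: (15.0, "weak-medium"),
    3: (20.0, "medium"),
    4: (25.0, "medium-strong"),
    5: (30.0, "strong"),
}

def launch_parameters(release_level):
    """Return (launch_distance_m, launch_intensity) for a release level."""
    distance, intensity = RELEASE_TABLE[release_level]
    return distance, intensity
```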
Thereafter, the content execution unit 1400 of the HMD module 1000 performs the score calculation step (S1300) of determining whether the first object (O1) and the second object (O2) have come into contact and calculating a score. As shown in FIG. 7(b), when the first object (O1) launched according to the user's operation contacts the targeted second object (O2), the content execution unit 1400 of the HMD module 1000 determines whether contact has occurred and calculates a score, and the calculated score may be reflected in real time while the virtual reality content is running and displayed on the virtual reality screen, as shown in FIG. 7(c).
In this way, while the user experiences the virtual reality content, the virtual reality content collects the coordinate information of the user's gaze and the pupil position information and reflects them in the virtual reality content, thereby improving the user's immersion.
FIG. 8 schematically shows display screens of the HMD module 1000 provided in the release step (S1200) according to an embodiment of the present invention.
Specifically, FIGS. 8(a) and 8(b) show display screens of the HMD module 1000 on which scores calculated by performing the score calculation step (S1300) are displayed. As shown in FIGS. 8(a) and 8(b), the first object (O1) is launched toward and contacts the targeted second object (O2) according to the user's controller operation. Referring to each of these screens, each first object (O1) was launched at and contacted the same second object (O2), but the respective scores are calculated and displayed differently. In this way, even if the second object (O2) at which the first object (O1) is launched is the same, the score calculated in the score calculation step (S1300) may differ. Preferably, for the second object (O2), the score calculated in the score calculation step (S1300) is set differently according to a preset criterion for the area contacted by the first object (O1). FIG. 8(c) shows the second object (O2) displayed in FIGS. 8(a) and 8(b). As shown in FIG. 8(c), for the second object (O2) displayed on the virtual reality screen, the score calculated in the score calculation step (S1300) may be set differently according to the preset criterion for the area contacted by the first object (O1).
More preferably, in the targeting step (S1100), when the coordinate information of the user's gaze and the pupil position information meet a preset criterion, the per-area scores set according to the preset criterion are displayed on the second object (O2). In addition, although FIG. 8(c) shows that a higher score is assigned the closer the contact is to the center of the second object (O2), scores may also be assigned according to other preset criteria depending on the settings of the virtual reality content.
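The per-area scoring of FIG. 8(c) can be sketched by modelling O2 as concentric rings with the highest score at the center; the ring radii and point values are assumptions of this sketch, since the specification only requires that the score differ by contact area.

```python
import math

# Sketch of per-area scoring of the second object O2 (S1300): O2 is
# modelled as concentric rings, with the highest score at the center, as
# in FIG. 8(c). The radius and the per-ring point values are illustrative
# assumptions.

O2_RADIUS = 1.0
RING_SCORES = [100, 50, 10]   # center ring, middle ring, outer ring

def contact_score(hit_point, o2_center):
    """Score for the point where O1 contacts O2 (0 if O2 is missed)."""
    d = math.dist(hit_point, o2_center)
    if d > O2_RADIUS:
        return 0
    ring_width = O2_RADIUS / len(RING_SCORES)
    ring = min(int(d / ring_width), len(RING_SCORES) - 1)
    return RING_SCORES[ring]
```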
By converging the eyes further and concentrating the gaze in order to obtain a higher score, the user can be provided with a virtual reality screen on which the scores assigned differently to each area of the second object (O2) are displayed; and by concentrating on the target area of the second object (O2) based on the displayed score information and obtaining a high score, the immersion of the virtual reality content can be improved.
FIG. 9 schematically shows the internal configuration of the service server 2000 according to an embodiment of the present invention.
The service server 2000 of the present invention receives the gaze heat map generated by the strabismus diagnosis unit 1500 of the HMD module 1000 and derives strabismus diagnosis information for the received gaze heat map using a diagnosis model trained on learning gaze heat map data. As shown in FIG. 9, the service server 2000 includes a strabismus diagnosis information derivation unit 2100 and a diagnosis model learning unit 2200.
The strabismus diagnosis information derivation unit 2100 derives strabismus diagnosis information for the gaze heat map received from the HMD module 1000 through a diagnosis model using machine learning. After receiving the gaze heat map, the strabismus diagnosis information derivation unit 2100 automatically performs diagnosis using the diagnosis model and derives strabismus diagnosis information for the gaze heat map.
The diagnosis model learning unit 2200 may train a diagnosis model for deriving strabismus diagnosis information using learning gaze heat map data. Strabismus diagnosis information for the received gaze heat map is derived by the diagnosis model trained on the learning gaze heat map data.
The service server 2000 of FIG. 9 may further include elements other than those illustrated, but for convenience only the elements related to the rehabilitation training system according to embodiments of the present invention are shown.
FIG. 10 schematically shows a gaze heat map generated based on the coordinate information of the user's gaze according to an embodiment of the present invention.
Specifically, the strabismus diagnosis unit 1500 of the HMD module 1000 of the present invention performs a strabismus diagnosis step (S1400) of generating a gaze heat map based on the coordinate information of the user's gaze in the trigger step (S1000), the targeting step (S1100), and the release step (S1200), and transmitting the generated gaze heat map to the service server 2000. The strabismus diagnosis unit 1500 of the HMD module 1000 generates a gaze heat map based on the coordinate information of the user's gaze; as shown in FIG. 10, such a gaze heat map is information that represents, as an image, the length of time the user's gaze stayed at each position on the virtual reality screen, based on the coordinate information of the user's gaze. Through such a gaze heat map, it is possible to determine, in comparison with the coordinates of the first object (O1) and the second object (O2), at which positions the exotropia patient's gaze stayed and for how long it stayed there. Such a gaze heat map may be represented as two-dimensional image information, as shown in FIG. 10.
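The dwell-time accumulation described above can be sketched as follows; the grid resolution and the use of normalized screen coordinates are assumptions of this sketch.

```python
# Sketch of gaze-heat-map accumulation (S1400): gaze samples are binned
# into a coarse 2-D grid, and each cell accumulates the time the gaze
# dwelt there, yielding the image-like dwell-time map of FIG. 10. The
# grid resolution and normalized coordinates are illustrative assumptions.

GRID_W, GRID_H = 16, 9   # assumed heat-map resolution

def build_gaze_heatmap(samples, dt):
    """samples: gaze points (x, y) normalized to [0, 1); dt: frame time."""
    heatmap = [[0.0] * GRID_W for _ in range(GRID_H)]
    for x, y in samples:
        if 0.0 <= x < 1.0 and 0.0 <= y < 1.0:
            heatmap[int(y * GRID_H)][int(x * GRID_W)] += dt
    return heatmap
```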
FIG. 11 schematically shows the steps performed by the HMD module 1000 and the service server 2000 according to an embodiment of the present invention.
The rehabilitation training system including the HMD module 1000 and the service server 2000 of the present invention performs: a step (S200) of generating a gaze heat map based on the coordinate information of the user's gaze; a step (S210) of transmitting the generated heat map to the service server 2000; a step (S220) of deriving strabismus diagnosis information for the received gaze heat map using the diagnosis model trained on learning gaze heat map data; and a step (S230) of transmitting the derived strabismus diagnosis information.
Specifically, in step S200, the strabismus diagnosis unit 1500 of the HMD module 1000 generates a gaze heat map based on the coordinate information of the user's gaze while the trigger step (S1000), the targeting step (S1100), and the release step (S1200) are performed by the content execution unit 1400. The gaze heat map is information that represents, as an image, the length of time the user's gaze stayed on the virtual reality screen, based on the coordinate information of the user's gaze.
In step S210, the strabismus diagnosis unit 1500 of the HMD module 1000 transmits the generated gaze heat map to the service server 2000.
In step S220, the service server 2000 derives strabismus diagnosis information for the received gaze heat map using the diagnosis model trained on learning gaze heat map data. The diagnosis model is trained on the gaze heat map data of a plurality of exotropia patients who performed the virtual reality content in the past, and derives strabismus diagnosis information for the gaze heat map received from the HMD module 1000. Preferably, the strabismus diagnosis information may include information on the user's strabismic eye, information on the strabismus angle, and the frequency of manifestation of exotropia. The diagnosis model may analyze the gaze heat map using artificial neural network techniques that incorporate a temporal dimension, such as an RNN, LSTM, or GRU, and the diagnosis model may include one or more deep-learning-based trained artificial neural network modules.
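For illustration of the data flow only (heat map in, diagnosis scores out), a minimal hand-rolled recurrent pass over heat-map rows is sketched below. This is not the trained model of the specification: a real diagnosis model would be a deep network with learned weights, whereas the fixed weights, the tanh update, and the decision threshold here are arbitrary assumptions.

```python
import math

# Illustrative sketch only: a minimal recurrent pass over the rows of a
# gaze heat map, standing in for the RNN/LSTM/GRU analysis mentioned in
# the specification. The weights and threshold are arbitrary assumptions
# chosen to show the structure of the computation, not learned values.

def recurrent_summary(heatmap, w_in=0.1, w_rec=0.5):
    """Fold heat-map rows (treated as a sequence) into one state value."""
    state = 0.0
    for row in heatmap:
        state = math.tanh(w_in * sum(row) + w_rec * state)
    return state

def diagnose(heatmap, threshold=0.5):
    """Toy decision: low accumulated dwell on the targets flags suspicion."""
    score = recurrent_summary(heatmap)
    return {"score": score, "exotropia_suspected": score < threshold}
```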
In step S230, the service server 2000 transmits the derived strabismus diagnosis information. The derived strabismus diagnosis information may be transmitted to the HMD module 1000 so that the HMD module 1000 displays the strabismus diagnosis information, or it may be transmitted to the user's terminal or to the terminal of the user's guardian or a specialist to be utilized for exotropia diagnosis.
In this way, the HMD module 1000 generates a gaze heat map based on the coordinate information of the user's gaze and the pupil position information and transmits it to the service server 2000, and the service server 2000 derives strabismus diagnosis information for the transmitted gaze heat map and transmits it to the HMD module 1000 or an external terminal, so that the strabismus diagnosis information can be utilized as data for the user's care, treatment, and consultation.
FIG. 12 schematically shows the operation of the diagnosis model learning unit 2200 of the service server 2000 according to an embodiment of the present invention.
The service server 2000 of the present invention receives the gaze heat map generated by the strabismus diagnosis unit 1500 of the HMD module 1000 and derives strabismus diagnosis information for the received gaze heat map using a diagnosis model trained on learning gaze heat map data. As shown in FIG. 12(a), the service server 2000 includes the diagnosis model learning unit 2200, and the diagnosis model learning unit 2200 may train the diagnosis model using learning gaze heat map data. The learning gaze heat map data used to train the diagnosis model in this way may be the gaze heat map data of a plurality of exotropia patients who have used the rehabilitation training system of the present invention, as shown in FIG. 12(b). The diagnosis model may be trained using, as learning gaze heat map data, a plurality of gaze heat maps derived while a plurality of exotropia patients with different strabismus information, including the eye in which strabismus manifests and the strabismus angle, performed the virtual reality content in the past. Preferably, the learning gaze heat map data is the gaze heat map data, pre-stored in the service server 2000, of a plurality of exotropia patients who used the rehabilitation training system in the past. Alternatively, a diagnosis result for the gaze heat map received from the HMD module 1000 may be received, and the gaze heat map including the diagnosis result may be utilized as learning gaze heat map data for training the diagnosis model.
FIG. 13 exemplarily shows the internal configuration of a computing device according to an embodiment of the present invention.
As shown in FIG. 13, the computing device 11000 may include at least one processor 11100, a memory 11200, a peripheral interface 11300, an input/output (I/O) subsystem 11400, a power circuit 11500, and a communication circuit 11600. In this case, the computing device 11000 may correspond to the service server 2000 or the HMD module 1000.
The memory 11200 may include, for example, high-speed random access memory, a magnetic disk, SRAM, DRAM, ROM, flash memory, or non-volatile memory. The memory 11200 may contain software modules, instruction sets, or other various data, including a trained model, required for the operation of the computing device 11000.
In this case, access to the memory 11200 by other components, such as the processor 11100 or the peripheral interface 11300, may be controlled by the processor 11100.
The peripheral interface 11300 may couple the input and/or output peripherals of the computing device 11000 to the processor 11100 and the memory 11200. The processor 11100 may execute software modules or instruction sets stored in the memory 11200 to perform various functions for the computing device 11000 and to process data.
The input/output subsystem 11400 may couple various input/output peripherals to the peripheral interface 11300. For example, the input/output subsystem 11400 may include a controller for coupling peripherals such as a monitor, keyboard, mouse, or printer, or, as needed, a touch screen or sensor, to the peripheral interface 11300. According to another aspect, input/output peripherals may be coupled to the peripheral interface 11300 without passing through the input/output subsystem 11400.
The power circuit 11500 may supply power to all or some of the components of the terminal. For example, the power circuit 11500 may include a power management system, one or more power sources such as a battery or alternating current (AC), a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other components for power generation, management, and distribution.
The communication circuit 11600 may enable communication with other computing devices using at least one external port.
Alternatively, as described above, the communication circuit 11600 may, if necessary, include an RF circuit that transmits and receives RF signals, also known as electromagnetic signals, to enable communication with other computing devices.
The embodiment of FIG. 13 is only one example of the computing device 11000; the computing device 11000 may omit some of the components shown in FIG. 13, may further include additional components not shown in FIG. 13, or may have a configuration or arrangement that combines two or more components. For example, a computing device for a communication terminal in a mobile environment may further include a touch screen, sensors, and the like in addition to the components shown in FIG. 13, and the communication circuit 11600 may include circuitry for RF communication in various communication schemes (WiFi, 3G, LTE, Bluetooth, NFC, Zigbee, etc.). The components that can be included in the computing device 11000 may be implemented in hardware, software, or a combination of both hardware and software, including integrated circuits specialized for one or more signal-processing tasks or applications.
Methods according to embodiments of the present invention may be implemented in the form of program instructions executable by various computing devices and recorded on a computer-readable medium. In particular, the program according to the present embodiment may be configured as a PC-based program or an application dedicated to a mobile terminal. The application to which the present invention is applied may be installed on a user terminal through a file provided by a file distribution system. As an example, the file distribution system may include a file transmission unit (not shown) that transmits the file in response to a request from the user terminal.
The devices described above may be implemented as hardware components, software components, and/or a combination of hardware components and software components. For example, the devices and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. A processing device may run an operating system (OS) and one or more software applications executed on the operating system. The processing device may also access, store, manipulate, process, and generate data in response to the execution of software. For convenience of understanding, the processing device is sometimes described as a single device, but a person of ordinary skill in the art will recognize that the processing device may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may include a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
Software may include a computer program, code, instructions, or a combination of one or more of these, and may configure a processing device to operate as desired or may command the processing device independently or collectively. Software and/or data may be embodied, permanently or temporarily, in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, in order to be interpreted by the processing device or to provide instructions or data to the processing device. The software may be distributed over networked computing devices and stored or executed in a distributed manner. Software and data may be stored on one or more computer-readable recording media.
Methods according to the embodiments may be implemented in the form of program instructions executable through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and configured for the embodiments, or may be known and available to those skilled in computer software. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine code produced by a compiler but also high-level language code that can be executed by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
Although the embodiments have been described with reference to limited embodiments and drawings, a person of ordinary skill in the art can make various modifications and variations from the above description. For example, appropriate results may be achieved even if the described techniques are performed in an order different from the described method, and/or components of the described systems, structures, devices, circuits, and the like are combined or coupled in a form different from the described method, or are replaced or substituted by other components or equivalents.
Therefore, other implementations, other embodiments, and equivalents of the claims also fall within the scope of the claims set forth below.

Claims (8)

  1. An exotropia patient rehabilitation training system comprising an HMD module and a service server,
    wherein virtual reality content can be executed on the HMD module,
    the virtual reality content comprising:
    a trigger step of moving a first object in a preset area of the central portion of a virtual reality screen from a start position toward the user according to the user's controller manipulation;
    a targeting step of moving the virtual reality screen according to the directional manipulation of the HMD module following the movement of the user's head;
    a release step of launching the first object in the virtual reality screen according to the user's controller manipulation; and
    a score calculation step of calculating a score by determining whether the first object launched in the release step contacts one or more second objects present in the virtual reality screen.
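The score calculation step above can be sketched as follows. The claim only requires that contact between the launched first object and the second objects be determined; the `Obj` dataclass, the sphere-contact rule (centre distance against the sum of radii), and the scoring amount are illustrative assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Obj:
    """Hypothetical object in the virtual reality screen: a sphere with a
    centre position and a radius (the claimed 'preset size')."""
    x: float
    y: float
    z: float
    radius: float

def score_step(first_obj, second_objects, base_score=10):
    """Sketch of the score calculation step: award points for each second
    object the launched first object is in contact with."""
    score = 0
    for target in second_objects:
        dist = math.dist((first_obj.x, first_obj.y, first_obj.z),
                         (target.x, target.y, target.z))
        if dist <= first_obj.radius + target.radius:  # contact detected
            score += base_score
    return score
```

For example, a first object launched to (0, 0, 5) would score against a second object at (0, 0, 5.2) but not against one several units away.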
  2. The system according to claim 1,
    wherein each of the one or more second objects is set to have a respective preset size, and
    the coordinates of each second object are at a respective distance from the coordinates of the user.
  3. The system according to claim 1,
    wherein the HMD module
    collects coordinate information of the user's gaze on the virtual reality screen and pupil position information,
    the trigger step
    derives a release level of the first object based on the coordinate information of the user's gaze and the user's pupil position information while the first object is moving, and
    the release step
    determines the launch intensity or launch distance of the first object in the virtual reality screen based on the release level derived in the trigger step.
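A minimal sketch of claim 3: a release level is derived from how closely the user's gaze tracked the moving first object (with pupil position valid), and the release step maps that level to a launch distance. The tracking-accuracy rule and the linear mapping are assumptions; the claim fixes only the inputs (gaze coordinates, pupil position) and the outputs (launch intensity or distance).

```python
def release_level(gaze_points, object_points, pupil_ok_flags):
    """Trigger-step sketch: fraction of samples in which the pupil
    position was valid and the gaze stayed within a small tolerance of
    the moving first object (illustrative rule, not from the patent)."""
    followed = sum(
        1 for (gx, gy), (ox, oy), ok in zip(gaze_points, object_points,
                                            pupil_ok_flags)
        if ok and abs(gx - ox) < 0.1 and abs(gy - oy) < 0.1)
    return followed / len(gaze_points)

def launch_distance(level, max_distance=20.0):
    """Release-step sketch: launch distance scales linearly with the
    release level derived in the trigger step."""
    return level * max_distance
```

A patient who tracks the object accurately throughout the trigger step would thus launch the object farther, which is the training incentive the claim describes.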
  4. The system according to claim 1,
    wherein the HMD module
    collects the user's pupil position information on the virtual reality screen, and
    the trigger step
    resets the position of the first object, which has been moved toward the user, to the start position when the user's pupil position does not meet a preset criterion.
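The reset behaviour of claim 4 amounts to a guard in the trigger step: when the pupil position falls outside a preset criterion, the first object returns to its start position. The rectangular allowed region below is an assumption, since the claim does not disclose the form of the criterion.

```python
def trigger_update(object_pos, start_pos, pupil_pos, allowed_box):
    """Sketch of the claim-4 guard: keep the first object at its current
    (user-side) position only while the pupil position stays inside a
    hypothetical allowed box; otherwise reset to the start position."""
    (xmin, xmax), (ymin, ymax) = allowed_box
    px, py = pupil_pos
    if not (xmin <= px <= xmax and ymin <= py <= ymax):
        return start_pos  # pupil criterion violated: reset the object
    return object_pos
```

In a training session this would force the patient to keep both eyes properly aligned before the trigger step can complete.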
  5. The system according to claim 1,
    wherein the HMD module
    collects coordinate information of the user's gaze on the virtual reality screen, and
    the targeting step
    derives a release area according to a preset criterion based on the coordinate information of the gaze of the user's left eye and the coordinate information of the gaze of the user's right eye, and the first object is launched within the range of the derived release area.
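One illustrative reading of claim 5: the release area is derived from both eyes' gaze coordinates, and any launch point is constrained to that area. Since the preset criterion is not disclosed, the sketch below assumes a circle centred at the midpoint of the two gaze points whose radius grows with their disparity, so a larger misalignment between the eyes yields a wider release area.

```python
import math

def release_area(left_gaze, right_gaze, base_radius=0.05):
    """Derive a (centre, radius) release area from left- and right-eye
    gaze coordinates (midpoint-plus-disparity rule is an assumption)."""
    cx = (left_gaze[0] + right_gaze[0]) / 2
    cy = (left_gaze[1] + right_gaze[1]) / 2
    disparity = math.dist(left_gaze, right_gaze)
    return (cx, cy), base_radius + disparity / 2

def clamp_to_area(point, centre, radius):
    """The first object may only be launched within the release area:
    points outside the circle are pulled back onto its boundary."""
    d = math.dist(point, centre)
    if d <= radius:
        return point
    scale = radius / d
    return (centre[0] + (point[0] - centre[0]) * scale,
            centre[1] + (point[1] - centre[1]) * scale)
```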
  6. The system according to claim 1,
    wherein the virtual reality content further comprises
    a strabismus diagnosis step of generating a gaze heat map based on the coordinate information of the user's gaze in the trigger step, the targeting step, and the release step, and transmitting the generated gaze heat map to the service server, and
    the service server derives strabismus diagnosis information for the received gaze heat map using a diagnosis model trained on learning gaze heat map data.
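The strabismus diagnosis step of claim 6 accumulates the gaze coordinates collected across the trigger, targeting, and release steps into a gaze heat map before transmission to the service server. A minimal binning sketch, assuming gaze coordinates normalised to [0, 1) and a hypothetical 8x8 grid:

```python
def gaze_heatmap(gaze_points, grid=(8, 8)):
    """Bin normalised (x, y) gaze coordinates into a 2-D count grid.
    Grid size and coordinate normalisation are assumptions; the patent
    only specifies that a gaze heat map is generated from the gaze
    coordinate information of the three steps."""
    rows, cols = grid
    heat = [[0] * cols for _ in range(rows)]
    for x, y in gaze_points:
        r = min(int(y * rows), rows - 1)  # clamp 1.0 into the last bin
        c = min(int(x * cols), cols - 1)
        heat[r][c] += 1
    return heat
```

The resulting grid is what the HMD module would serialise and send, and what the trained diagnosis model on the service server would consume.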
  7. An exotropia patient rehabilitation training method using an exotropia patient rehabilitation training system comprising an HMD module and a service server, the method comprising:
    a trigger step of moving, by the HMD module, a first object in a preset area of the central portion of a virtual reality screen from a start position toward the user according to the user's controller manipulation;
    a targeting step of moving, by the HMD module, the virtual reality screen according to the directional manipulation of the HMD module following the movement of the user's head;
    a release step of launching, by the HMD module, the first object in the virtual reality screen according to the user's controller manipulation; and
    a score calculation step of calculating, by the HMD module, a score by determining whether the first object launched in the release step contacts one or more second objects present in the virtual reality screen.
  8. A computer-readable medium for implementing an exotropia patient rehabilitation training method using an exotropia patient rehabilitation training system comprising an HMD module and a service server,
    wherein the computer-readable medium stores instructions that cause components of the exotropia patient rehabilitation training system to perform the following steps:
    a trigger step of moving, by the HMD module, a first object in a preset area of the central portion of a virtual reality screen from a start position toward the user according to the user's controller manipulation;
    a targeting step of moving, by the HMD module, the virtual reality screen according to the directional manipulation of the HMD module following the movement of the user's head;
    a release step of launching, by the HMD module, the first object in the virtual reality screen according to the user's controller manipulation; and
    a score calculation step of calculating, by the HMD module, a score by determining whether the first object launched in the release step contacts one or more second objects present in the virtual reality screen.
PCT/KR2020/016128 2020-02-11 2020-11-17 Virtual-reality system and method for rehabilitating exotropia patients on basis of artificial intelligence, and computer-readable medium WO2021162207A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0016552 2020-02-11
KR1020200016552A KR102120112B1 (en) 2020-02-11 2020-02-11 System, Method and Computer-readable Medium for Rehabilitation training with Virtual Reality for Patients with Exotropia Based on Artificial Intelligence

Publications (1)

Publication Number Publication Date
WO2021162207A1 true WO2021162207A1 (en) 2021-08-19

Family

ID=71081987

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/016128 WO2021162207A1 (en) 2020-02-11 2020-11-17 Virtual-reality system and method for rehabilitating exotropia patients on basis of artificial intelligence, and computer-readable medium

Country Status (2)

Country Link
KR (1) KR102120112B1 (en)
WO (1) WO2021162207A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102120112B1 (en) * 2020-02-11 2020-06-09 가천대학교 산학협력단 System, Method and Computer-readable Medium for Rehabilitation training with Virtual Reality for Patients with Exotropia Based on Artificial Intelligence
KR102406472B1 (en) * 2020-07-31 2022-06-07 전남대학교산학협력단 Simulation system for educating cross-eye based on virtual reality
CN112641610B (en) * 2020-12-21 2023-04-07 韩晓光 Amblyopia training method, device and system
KR102563365B1 (en) * 2021-05-11 2023-08-03 고려대학교 산학협력단 System and method for ambulatory monitoring of eye alignment in strabismus
KR102549616B1 (en) 2021-09-03 2023-06-29 재단법인 아산사회복지재단 Apparatus and method for providing eye movement and visual perception training based on virtual reality
KR102436681B1 (en) 2022-05-16 2022-08-26 주식회사 엠디에이 Neurological disease auxiliary diagnosis system using smart mirror
KR102460828B1 (en) 2022-05-16 2022-11-01 주식회사 엠디에이 Exercise rehabilitation system using smart mirror

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101276097B1 (en) * 2012-11-16 2013-06-18 (주) 피디케이리미티드 Simulator and simulation method for treating adhd
KR101966164B1 (en) * 2017-01-12 2019-04-05 고려대학교산학협력단 System and method for ophthalmolgic test using virtual reality
KR20190058169A (en) * 2017-11-21 2019-05-29 대한민국(국립재활원장) Treatment System and Method Based on Virtual-Reality
KR20190062023A (en) * 2017-11-28 2019-06-05 전남대학교산학협력단 System and method for diagnosing for strabismus, aparratus for acquiring gaze image, computer program
KR102120112B1 (en) * 2020-02-11 2020-06-09 가천대학교 산학협력단 System, Method and Computer-readable Medium for Rehabilitation training with Virtual Reality for Patients with Exotropia Based on Artificial Intelligence

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101650706B1 (en) * 2014-10-07 2016-09-05 주식회사 자원메디칼 Device for wearable display

Also Published As

Publication number Publication date
KR102120112B1 (en) 2020-06-09

Similar Documents

Publication Publication Date Title
WO2021162207A1 (en) Virtual-reality system and method for rehabilitating exotropia patients on basis of artificial intelligence, and computer-readable medium
WO2018080149A2 (en) Biometric-linked virtual reality-cognitive rehabilitation system
WO2018124809A1 (en) Wearable terminal and method for operating same
WO2017126910A1 (en) Display and electronic device including the same
WO2016182181A1 (en) Wearable device and method for providing feedback of wearable device
WO2017119788A1 (en) Head-mounted electronic device
WO2018074837A1 (en) Challenge reward server and operation method therefor
WO2013133583A1 (en) System and method for cognitive rehabilitation using tangible interaction
WO2020050636A1 (en) User intention-based gesture recognition method and apparatus
WO2016010368A1 (en) Wearable control device, and authentication and pairing method therefor
US10936060B2 (en) System and method for using gaze control to control electronic switches and machinery
WO2018143509A1 (en) Moving robot and control method therefor
WO2020054954A1 (en) Method and system for providing real-time virtual feedback
WO2020242087A1 (en) Electronic device and method for correcting biometric data on basis of distance between electronic device and user, measured using at least one sensor
Chen et al. Effect of temporality, physical activity and cognitive load on spatiotemporal vibrotactile pattern recognition
EP3952728A1 (en) Electronic device and method for providing information for stress relief by same
WO2024053989A1 (en) System and method for recommending rehabilitation exercise on basis of living environment detection by using digital image recognition
WO2017090815A1 (en) Apparatus and method for measuring joint range of motion
WO2021221490A1 (en) System and method for robust image-query understanding based on contextual features
WO2019143186A1 (en) Brain connectivity-based visual perception training device, method and program
WO2020235730A1 (en) Learning performance prediction method based on scan pattern of learner in video learning environment
WO2023101180A1 (en) System and method for mixed reality-based rehabilitation treatment
WO2023033545A1 (en) Device, method and program for searching and training preferred retinal locus of patient with visual field damage
WO2022019692A1 (en) Method, system, and non-transitory computer-readable recording medium for authoring animation
WO2022080549A1 (en) Motion tracking device of dual lidar sensor structure

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20919109

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20919109

Country of ref document: EP

Kind code of ref document: A1