CN113101612A - Immersive upper limb rehabilitation system - Google Patents


Info

Publication number
CN113101612A
CN113101612A
Authority
CN
China
Prior art keywords
virtual
patient
dimensional model
training
action
Prior art date
Legal status
Granted
Application number
CN202110375325.8A
Other languages
Chinese (zh)
Other versions
CN113101612B (en)
Inventor
赵萍
葛兆杰
张涯婷
邓雪婷
关海威
葛巧德
Current Assignee
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202110375325.8A
Publication of CN113101612A
Application granted
Publication of CN113101612B
Status: Active

Classifications

    • A61H1/0274 Stretching or bending or torsioning apparatus for exercising, for the upper limbs
    • A63B23/12 Exercising apparatus specially adapted for upper limbs or related muscles, e.g. chest, upper back or shoulder muscles
    • A63B24/0006 Computerised comparison for qualitative assessment of motion sequences or the course of a movement
    • A63B24/0021 Tracking a path or terminating locations
    • A63B24/0062 Monitoring athletic performances, e.g. for determining the work of a user on an exercise apparatus, the completed jogging or cycling distance
    • A63B71/0622 Visual, audio or audio-visual systems for entertaining, instructing or motivating the user
    • A61H2201/1207 Driving means with electric or magnetic drive
    • A61H2205/06 Devices for specific parts of the body: arms
    • A63B2071/0636 3D visualisation
    • A63B2071/0638 Displaying moving images of recorded environment, e.g. virtual environment
    • A63B2071/0647 Visualisation of executed movements
    • A63B2071/0666 Display arranged on the user, worn on the head or face, e.g. combined with goggles or glasses

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Rehabilitation Therapy (AREA)
  • Pain & Pain Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Epidemiology (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The invention relates to an immersive upper limb rehabilitation system comprising at least: a robot body for assisting the patient's arm in rehabilitation training and collecting training data during training; an active-passive control platform for sending control instructions to the robot body and recording the training data; and a virtual reality platform for displaying a virtual training environment that provides visual feedback on rehabilitation training to the patient. The virtual reality platform is configured to: upon entering the active training rehabilitation mode, display the constructed virtual training environment to the patient; and guide the patient in adjusting actions by synchronously or asynchronously establishing a first virtual three-dimensional model corresponding to part of a real object and a second virtual three-dimensional model of a virtual object corresponding to the current virtual training environment, and/or by replication-projecting at least one virtual three-dimensional model in the current virtual training environment, under non-contact interaction between the patient and the virtual training environment.

Description

Immersive upper limb rehabilitation system
Technical Field
The invention relates to the technical field of medical rehabilitation instruments, in particular to an immersive upper limb rehabilitation system.
Background
Cerebrovascular disease, severe brain trauma and other serious nervous-system diseases can leave patients with limb movement disorders, causing considerable inconvenience in their daily lives. Traditional rehabilitation, which relies solely on therapists to conduct training, inevitably limits improvements in training efficiency and method, and the training effect depends on the therapist's skill level. With economic development and advances in robotics, using upper limb rehabilitation robots to assist patients with upper limb rehabilitation has become a mainstream trend. The shoulder-joint rehabilitation training robot is an emerging, fast-developing technology and a new application of robotics in the medical field. An upper limb rehabilitation robot can drive the patient's upper limbs through training motions based on a treatment task, helping the patient effectively recover upper limb strength and motor function.
In the prior art, patent document CN104363982B proposes an upper limb rehabilitation robot system comprising a computer and a rehabilitation robot. The computer exchanges information with the rehabilitation robot, records training information and sends control instructions to the robot; it also displays the virtual training environment, provides visual feedback on rehabilitation training, and shows a control interface and rehabilitation training information. The rehabilitation robot serves as the system's actuator: connected to the computer, it receives the computer's control instructions, completes motion control and end-effector force output, and sends sensor data back to the computer. The interaction force between the patient and the handle is recorded by a multi-dimensional force/torque sensor. The system offers an active, a passive and an active-passive training rehabilitation mode.
In the above scheme, especially in the active training mode, the patient moves the affected upper limb relying entirely on his or her own understanding of the virtual training environment. In practice, the patient's movement in this mode often fails to meet the standard training requirement, so the rehabilitation effect cannot be guaranteed and movement injury cannot be ruled out. Moreover, the collected sensor data are acquired while the patient fails to reach the standard training action, so they cannot genuinely represent the patient's motor ability; rehabilitation training assessment based on such data therefore carries a large deviation and cannot truly reflect the patient's recovery.
Furthermore, on the one hand, there are differences in understanding among those skilled in the art; on the other hand, although the applicant studied a great deal of literature and patents when making the present invention, space does not permit listing all of their details and contents. This by no means implies that the present invention lacks these prior-art features; on the contrary, the present invention may possess all the features of the prior art, and the applicant reserves the right to add related prior art to the background section.
Disclosure of Invention
The invention addresses the problem that existing upper limb rehabilitation robots, which drive the patient's upper limb through training actions, cannot interact with the patient, leading to a poor rehabilitation effect and patient experience. In the upper limb rehabilitation robot system of prior-art patent document CN104363982B, particularly in the active training mode, the patient moves the affected upper limb based on his or her own understanding of the virtual training environment; in practice the movement often fails to meet the standard training requirement, so the rehabilitation effect cannot be guaranteed and movement injury cannot be ruled out. In addition, the collected sensor data are acquired while the patient fails to reach the standard training action and cannot genuinely represent the patient's motor ability, so rehabilitation training assessment based on such data carries a large deviation and cannot truly reflect the patient's recovery.
Against the shortcomings of the prior art, the invention provides an immersive upper limb rehabilitation system comprising at least: a robot body for assisting the patient's arm in rehabilitation training and collecting training data during training; an active-passive control platform for sending control instructions to the robot body and recording the training data; and a virtual reality platform for displaying a virtual training environment that provides visual feedback on rehabilitation training to the patient. The virtual reality platform is configured to: upon entering the active training rehabilitation mode, display the constructed virtual training environment to the patient; and guide the patient in adjusting actions by synchronously or asynchronously establishing a first virtual three-dimensional model corresponding to part of a real object and a second virtual three-dimensional model of a virtual object corresponding to the current virtual training environment, and/or by replication-projecting at least one virtual three-dimensional model in the current virtual training environment, under non-contact interaction between the patient and the virtual training environment.
Here, "synchronous or asynchronous establishment" is relative to the time at which the virtual training environment is established: the two virtual three-dimensional models may be established and shown in the virtual training environment simultaneously, or each established and shown at different times. "Replication projection" means synchronously copying a virtual three-dimensional model already established in the virtual training environment and projecting the copy onto a relative spatial position different from that of the replicated object, with the copy kept synchronous with the replicated object.
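As a rough illustration (not from the patent; all class and function names are invented for this sketch), the replication projection described above amounts to copying a model's pose each frame and rendering the copy at a fixed spatial offset, so it stays synchronous with the replicated object:

```python
class VirtualModel:
    """Minimal stand-in for a virtual three-dimensional model: a name,
    a position in the training environment, and a joint pose."""
    def __init__(self, name, position, pose):
        self.name = name
        self.position = list(position)   # (x, y, z) in the virtual scene
        self.pose = dict(pose)           # joint name -> angle in degrees

def replicate_project(model, offset):
    """Copy `model` and place the copy at a different relative spatial
    position; the caller re-syncs the pose every frame."""
    clone = VirtualModel(model.name + "_copy", model.position, model.pose)
    clone.position = [p + o for p, o in zip(model.position, offset)]
    return clone

def sync(clone, source):
    """Keep the replicated model's pose synchronous with its source."""
    clone.pose = dict(source.pose)

patient_model = VirtualModel("patient", (0.0, 0.0, 0.0), {"elbow": 30.0})
third_model = replicate_project(patient_model, offset=(1.5, 0.0, 0.0))
patient_model.pose["elbow"] = 45.0   # the patient moves
sync(third_model, patient_model)     # the copy follows synchronously
```

The copy occupies a different relative spatial position but mirrors every pose change, which is the behavior the paragraph describes.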
According to a preferred embodiment, while the patient uses the rehabilitation system, the virtual reality platform monitors the patient's movement and, whenever it detects that at least one action guiding condition is triggered, guides the patient to adjust the action by introducing follow positioning points and/or a transparency-change overlap into the virtual training environment.
According to a preferred embodiment, the virtual reality platform is further configured to: triggering a first action guiding condition when the action deviation degree of the movement of the patient is monitored to reach a first preset deviation degree threshold value; and guiding the patient to adjust the action by introducing the following positioning point into the virtual training environment.
According to a preferred embodiment, the virtual reality platform is further configured to: triggering a second action guiding condition when the number of times that the action deviation degree of the movement of the patient reaches a first preset deviation degree threshold value reaches a first preset number threshold value is monitored; and guiding the patient to adjust the action by overlapping transparency changes of the two virtual three-dimensional models.
According to a preferred embodiment, the two virtual three-dimensional models may refer to a second virtual three-dimensional model and a third virtual three-dimensional model, and the second virtual three-dimensional model may be a virtual object that performs a standard action corresponding to the current virtual training environment.
According to a preferred embodiment, the third virtual three-dimensional model may be obtained by replication-projecting the first virtual three-dimensional model; the third model differs from the first in its relative spatial position in the virtual training environment.
According to a preferred embodiment, when two virtual three-dimensional models are overlapped with transparency changes, the patient can simultaneously observe the motion as a first-person-view picture and as a third-person-view picture.
According to a preferred embodiment, the virtual reality platform is further configured to: when the first guidance mode is selected and the second action guiding condition is triggered, construct the second and third virtual three-dimensional models at least from the training data, and simultaneously switch the relative spatial position of the first virtual three-dimensional model in the virtual training environment so that the second and third models occupy relative spatial positions observable by the patient.
According to a preferred embodiment, the virtual reality platform is further configured to: when the second guidance mode is selected and the second action guiding condition is triggered, maintain the relative spatial position of the first virtual three-dimensional model in the virtual training environment and introduce a newly constructed third virtual three-dimensional model, built at least from the training data, into the virtual training environment.
According to a preferred embodiment, the virtual reality platform is further configured to: and triggering a third action guiding condition when the number of times that the action deviation degree of the patient movement reaches a second preset deviation degree threshold value exceeds a second preset number threshold value, and guiding the patient to adjust the action by performing transparency change overlapping on the two virtual three-dimensional models and introducing a following positioning point into the virtual training environment in a combined manner.
Drawings
FIG. 1 is a simplified block diagram of an active and passive control platform according to a preferred embodiment of the present invention;
FIG. 2 is a simplified block diagram of a virtual reality platform according to a preferred embodiment of the present invention;
FIG. 3 is a simplified overall schematic diagram of a robot body according to a preferred embodiment of the present invention;
FIG. 4 is a simplified partial schematic structural diagram of a robot body provided by the present invention;
FIG. 5 is a simplified assembly schematic of the motor mounting block provided by the present invention;
FIG. 6 is a simplified top view schematic diagram of a lever set according to the present invention;
FIG. 7 is a simplified overall structure of the arm support according to the present invention;
FIG. 8 is a simplified overall structure of the AB rod of the present invention;
FIG. 9 is a simplified overall structure diagram of the BC pole provided by the present invention;
FIG. 10 is a simplified overall structure of a CD rod according to the present invention;
FIG. 11 is a simplified overall structure diagram of a motor support plate according to the present invention;
FIG. 12 is a simplified overall structure diagram of the motor connecting plate provided in the present invention;
fig. 13 is a simplified overall structure diagram of the connecting plate for rod set according to the present invention.
List of reference numerals
1: support rod; 2: display group; 3: motor mounting block;
4: linkage; 5: mounting table plate; 6: robot body;
7: drive system; 8: sensor group; 9: single-chip microcomputer;
10: upper computer; 11: encoder; 12: medical interface display group;
13: three-dimensional scene display group; 14: multi-axis arm group; 15: axle arm;
301: motor; 302: motor connecting plate; 303: motor supporting plate;
401: arm rest; 402: BC rod; 403: AB rod;
404: CD rod; 405: rod group connecting plate; 406: bearing
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings.
The application provides an immersive upper limb rehabilitation system, which mainly comprises a robot body 6, an active and passive control platform and a virtual reality platform.
The virtual reality platform comprises a display device, which may be a conventional display or a VR, AR or MR head-mounted device. The display device shows the virtual training environment to the patient and provides visual feedback on rehabilitation training.
When the rehabilitation system enters an active training rehabilitation mode, the virtual reality platform displays the constructed virtual training environment to a patient through the display equipment, and establishes a first virtual three-dimensional model corresponding to a part of real objects and/or a second virtual three-dimensional model of virtual objects corresponding to the current virtual training environment.
A real object is an object that exists in reality, i.e. the patient using the rehabilitation system. A partial real object is a body part of the patient, for example the upper limb or the upper half of the body. The first virtual three-dimensional model corresponding to the partial real object is the model simulated in the virtual training environment; it follows the motion of the patient's upper limb in real time, i.e. the upper limb motion is mapped onto the display.
A virtual object is a virtually constructed object used to guide the patient's rehabilitation training. It corresponds to the current virtual training environment, i.e. it may change to a different type of object as the environment changes. Preferably, the virtual object may be a virtual character, matched to the patient, that performs a rowing motion in the virtual training environment and is positioned where the patient can observe it. For example, the virtual object may be a virtual crew member sitting in front of the first virtual three-dimensional model corresponding to the patient, riding in the same boat as the patient. The action performed by the virtual object should be the standard action corresponding to the current virtual training environment.
Most existing upper limb rehabilitation robots adopt virtual environments and map the patient's motion onto a virtual object, giving the patient a sense of immersion during rehabilitation training. However, such schemes either only passively collect the patient's motion data without feeding back in real time whether the motion is standard, or remind the patient by text or voice, which is rarely enough for the patient to understand how to achieve the required motion; a caregiver must then accompany and guide each patient throughout, creating a heavy nursing workload. The present application therefore adds a second virtual three-dimensional model alongside the first model corresponding to the patient, so the patient can intuitively observe the correct standard action instead of only watching a mapping of his or her own movement, thereby effectively guiding the rehabilitation training. Furthermore, displaying the two models simultaneously and synchronously lets the patient directly see the difference between his or her motion and the standard action and, even without a caregiver's guidance, actively adjust the action to better match the standard, greatly improving the training effect while keeping the patient fully immersed.
Preferably, the virtual training environment may be a virtual space whose scene changes as the patient makes rowing motions with the upper limbs. It may be, for example, a boat on a river, a submarine, an aircraft, or a hull on a track.
The virtual reality platform includes at least two guidance modes, a first guidance mode and a second guidance mode. In the first guidance mode, when the patient enters the virtual training environment, the first virtual three-dimensional model corresponding to the patient sits at the bow, and no second virtual three-dimensional model exists in the environment. In the second guidance mode, the first virtual three-dimensional model is seated at a position other than the bow, i.e. midship or stern, and the second virtual three-dimensional model is present in the environment.
Preferably, when the virtual reality platform is started in the first guidance mode, the virtual reality platform displays the virtual training environment constructed by the virtual reality platform to the patient through the display device, and establishes the first virtual three-dimensional model corresponding to part of the real object, and the current virtual training environment has no second virtual three-dimensional model.
Preferably, when the virtual reality platform is started in the second guidance mode, the virtual reality platform displays the virtual training environment constructed by the virtual reality platform to the patient through the display device, and establishes a first virtual three-dimensional model corresponding to a part of the real object and a second virtual three-dimensional model corresponding to the current virtual training environment.
The first or second guidance mode can be chosen freely according to the patient's actual situation; the main difference between the two is where in the hull the patient spends most of the time in the virtual training environment. Compared with the second mode, the first mode gives the patient a wider field of view and a better sense of motion, but its rehabilitation guidance effect is relatively weak; in the second mode, although another second virtual three-dimensional model is always in the patient's field of view, a better guidance effect is obtained. Caregivers can therefore select the first mode for patients with strong learning ability or good cognitive response, and the second mode for patients with weak learning ability or some impairment of cognitive response.
While the patient uses the rehabilitation system, the virtual reality platform monitors the patient's movement; when the movement does not conform to the set movement, the first action guiding condition is triggered and the patient is guided to adjust the action by introducing a follow positioning point into the virtual training environment.
In this application, a discrepancy between the patient's movement and the set movement means that the action deviation degree of the movement reaches the first preset deviation-degree threshold. The action deviation degree is a quantified measure of how far the patient's motion differs from the standard motion; it may be calculated from the amplitude, angle, force or speed of the patient's motion.
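One plausible realization of such a deviation degree, offered purely as an illustration (the feature names, weights and threshold are assumptions, not specified in the patent), is a weighted mean of relative errors between measured and standard motion features:

```python
def action_deviation(measured, standard, weights=None):
    """Quantify how far a patient action is from the standard action.

    `measured` and `standard` map feature names (e.g. amplitude, angle,
    force, speed) to values; the result is a weighted mean of relative
    errors, so 0.0 means a perfect match with the standard motion.
    """
    weights = weights or {k: 1.0 for k in standard}
    total_w = sum(weights.values())
    dev = 0.0
    for key, target in standard.items():
        err = abs(measured[key] - target) / (abs(target) or 1.0)
        dev += weights[key] * err
    return dev / total_w

# Illustrative feature values for one stroke (amplitude in metres,
# angle in degrees, speed in m/s).
standard = {"amplitude": 0.30, "angle": 90.0, "speed": 0.50}
measured = {"amplitude": 0.27, "angle": 81.0, "speed": 0.50}
deviation = action_deviation(measured, standard)
```

Comparing `deviation` against the first preset deviation-degree threshold then decides whether the first action guiding condition fires.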
The positioning point in the following positioning point may be a position that the upper limb should reach in the standard motion relative to the current motion of the first virtual three-dimensional model; the position may correspond to the palm, the wrist, the elbow, or the like of the upper limb. The following in the following positioning point may mean that the positioning point moves accordingly as the patient's motion changes. A complete paddle pulling period comprises single-orientation actions such as a paddle lifting and water entering stage, a paddle pulling stage, a paddle pressing and pushing starting stage, and a paddle pushing stage. Different following positioning points are arranged for different single-orientation actions, and the following positioning points change as the patient passes from one action to the next, so that the following positioning points in the display picture are limited in number and clearly specified and do not mislead the patient. Therefore, the patient can be guided to adjust the action timely and effectively when the patient's action is occasionally inconsistent. The following positioning point may have a meteor trail extending toward the patient's upper limb, which displays the task action path indicating the patient's movement; owing to the directional nature of the meteor trail, the patient can clearly see the direction in which to move.
The second virtual three-dimensional model is converted into a frame structure, and at least one structure point on the frame structure is selected as a following positioning point. When the virtual reality platform is in the first guidance mode and only the first virtual three-dimensional model has been established in the current virtual training environment, the following positioning points are introduced into the virtual training environment by establishing the second virtual three-dimensional model. Rather than loading into the virtual training environment a complete second virtual three-dimensional model, whose large data volume would delay the response, the model is simplified into following positioning points that require far less data processing and respond faster; these positioning points convert the standard action amplitude or standard action path into visual information that the patient can intuitively observe and compare. If the virtual three-dimensional model were introduced directly, it would have to be introduced every time the patient's action was inconsistent; and regardless of whether the correct standard action is displayed in the virtual three-dimensional environment, the patient can hardly follow it completely, that is, so-called non-standard actions occur frequently, so the system would have to repeatedly load and hide different prompt models, increasing the data load and the response delay.
Correspondingly, the data processing required to load the following positioning points, and their influence on the patient's rehabilitation process, are both very small, which avoids the situation in which the system must repeatedly load and hide different prompt models because the patient's actions are frequently non-standard.
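The frame-structure simplification described above can be sketched as follows; the joint names and the mapping from single-orientation actions to structure points are illustrative assumptions:

```python
# Joint vertices kept when the full model is reduced to a frame
# structure (illustrative).
FRAME_JOINTS = ("shoulder", "elbow", "wrist", "palm")

# Which structure point guides each single-orientation action of a
# paddle pulling period (assumed mapping).
ANCHOR_FOR_ACTION = {
    "paddle_lift_entry": "palm",
    "paddle_pull": "wrist",
    "paddle_press_push_start": "elbow",
    "paddle_push": "wrist",
}

def to_frame_structure(model_vertices):
    """Keep only the joint vertices of the full model: the frame structure."""
    return {name: model_vertices[name] for name in FRAME_JOINTS}

def following_point(model_vertices, action):
    """Select the single structure point used as the following
    positioning point for the current single-orientation action."""
    frame = to_frame_structure(model_vertices)
    return action, frame[ANCHOR_FOR_ACTION[action]]
```

The full mesh is discarded, so only a handful of coordinates need to be loaded and updated per frame instead of an entire prompt model.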
The interaction between the second virtual three-dimensional model and the first virtual three-dimensional model may be a process in which the motion parameters of the second virtual three-dimensional model are influenced by those of the first virtual three-dimensional model.
The virtual scene implementation unit may regulate the motion phase and the motion speed performed by the second virtual three-dimensional model based on training data related to the patient's upper limb movement. The motion phase may be each phase in a single paddle pulling period. By regulating the first and second virtual three-dimensional models to remain in the same motion phase and controlling the motion speed of the second virtual three-dimensional model, the standard motion is prevented from being too fast or too slow relative to what the patient can adapt to, so that the patient can clearly observe the difference between his or her own motion and the standard motion and quickly and effectively adjust the posture of the upper limb. Under the regulation of the virtual scene implementation unit, the motion time difference between the second and first virtual three-dimensional models does not exceed two motion phases, and the motion speed of the second virtual three-dimensional model is higher than that of the first virtual three-dimensional model but does not exceed a preset speed threshold, so that the patient can follow and the patient's sense of immersion is enhanced. When the motion time difference between the second and first virtual three-dimensional models reaches two motion phases, the second virtual three-dimensional model is instructed to repeatedly show the standard motions corresponding to those two motion phases until the first virtual three-dimensional model completes them. Repeated presentation may mean that the second virtual three-dimensional model repeats only the standard actions corresponding to the two motion phases.
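A minimal sketch of this pacing rule might look as follows, assuming illustrative phase names, a demonstration speed ratio, and the two-phase lead limit described above:

```python
# Phases of one paddle pulling period, indexed 0..3 (illustrative names).
PHASES = ("lift_entry", "pull", "press_push_start", "push")
MAX_PHASE_LEAD = 2      # demo model may lead the patient by at most two phases
SPEED_RATIO = 1.2       # demo runs faster than the patient... (assumed)
MAX_SPEED_RATIO = 1.5   # ...but never beyond this preset threshold (assumed)

def regulate_demo(patient_phase, demo_phase, patient_speed):
    """Return (next demo phase index, demo speed, repeat flag).

    When the demonstration model already leads by two phases it stops
    advancing and repeats those phases until the patient completes them;
    otherwise it may advance one phase, at a speed above the patient's
    but capped by the preset ratio.
    """
    lead = demo_phase - patient_phase
    repeat = lead >= MAX_PHASE_LEAD
    next_phase = demo_phase if repeat else demo_phase + 1
    demo_speed = patient_speed * min(SPEED_RATIO, MAX_SPEED_RATIO)
    return next_phase, demo_speed, repeat
```

Called once per control tick, this keeps the demonstration observable yet followable: always slightly ahead, never out of reach.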
The following positioning point can also be obtained by copying and projecting the existing second virtual three-dimensional model into the virtual training environment. When the virtual reality platform is in the second guidance mode and both the first and second virtual three-dimensional models have been established in the current virtual training environment, the existing second virtual three-dimensional model is copied and projected into the virtual training environment to introduce the following positioning points. The copy projection of the second virtual three-dimensional model is actually a copy projection of its frame structure, and at least one structure point on the frame structure is selected to obtain the following positioning point.
Copy projection may mean projecting the copy in a manner that establishes a synchronous association between the copy and the copied object. A synchronous association between the following positioning point and the second virtual three-dimensional model is established at the time of copying. Based on this, the following positioning point can maintain a dynamic correspondence with the second virtual three-dimensional model.
Copy projection may alternatively mean projecting the copy in a manner that establishes a synchronous association between the copy and the virtual training environment. A synchronous association between the following positioning point and the virtual training environment is established at the time of projection. Based on this, the following positioning point keeps its relative position in the virtual training environment and guides the patient to move along a path meeting the rehabilitation training requirements.
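The two copy-projection variants, one synchronized with the second virtual three-dimensional model and one with the virtual training environment, can be sketched as follows (the class name and mode strings are illustrative assumptions):

```python
class FollowingPoint:
    """Sketch of a following positioning point obtained by copy projection.

    mode "model": keeps a dynamic correspondence with the second virtual
    three-dimensional model and moves with it.
    mode "environment": keeps its relative position fixed in the virtual
    training environment after projection.
    """
    def __init__(self, source_pos, mode):
        self.mode = mode
        self.pos = tuple(source_pos)

    def update(self, model_pos):
        """Called each frame with the model's current position."""
        if self.mode == "model":
            self.pos = tuple(model_pos)   # follows the copied model
        # "environment" mode: position stays where it was projected
```

Which association is established at projection time decides whether the point guides toward a moving standard posture or toward a fixed path waypoint.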
While a patient uses the rehabilitation system, the virtual reality platform monitors the patient's motion. When the number of times the patient's motion is inconsistent with the set action reaches a first preset count threshold, a second action guidance condition is triggered, and the patient is guided to adjust the action by overlapping two virtual three-dimensional models with transparency changes.
The two virtual three-dimensional models may refer to a second virtual three-dimensional model and a third virtual three-dimensional model.
The second virtual three-dimensional model may always refer to a virtual object that performs a standard action corresponding to the current virtual training environment. The third virtual three-dimensional model may be obtained by a replica projection of the first virtual three-dimensional model. The third and the first virtual three-dimensional models are at different positions in the display.
With two virtual three-dimensional models overlapped under transparency changes, the patient can simultaneously observe his or her own motion as seen from the first-person perspective and from the third-person perspective. Because the patient can observe his or her own motion from both perspectives at once, the patient can adjust the action more effectively, achieving a better rehabilitation training effect.
In the transparency-change overlap, transparency may refer to the degree of visibility of each of the two virtual three-dimensional models in the display interface. Transparency change may mean that the transparency of the two models in the display interface is not fixed but varies dynamically. Overlapping may mean that the two models occupy the same relative spatial position in the virtual training environment; for example, the main trunks of the two models are fused into one, while their differing upper limb movements are displayed separately.
Under the transparency-change overlap setting, the virtual reality platform can selectively highlight, within a continuous motion, the specific action that does not conform to the set action by regulating the transparency changes of the second and third virtual three-dimensional models. Specifically, after the second or third action guidance condition is triggered, the patient may or may not be able to synchronously follow the virtual object performing the standard rowing motion, which can introduce a time difference between the motions of the patient and the virtual object; in either case, there remains the problem that the posture of the patient's upper limb is wrong or the upper limb is not stretched into place.
In contrast, in the virtual reality platform proposed in the present application, when the second or third action guidance condition has been triggered and the patient training data is monitored to conform to the standard motion, the transparency of at least part of the second virtual three-dimensional model is increased. At this point the patient's action meets the rehabilitation training requirements and the dependence on the second virtual three-dimensional model is weak, so the visibility of at least part of that model can be reduced, avoiding unnecessary interference caused by the two three-dimensional models being staggered.
Preferably, if the patient is synchronized with the virtual object, the transparency of the second virtual three-dimensional model corresponding to the virtual object is increased. Preferably, if the patient and the virtual object are not fully synchronized, the transparency of the second virtual three-dimensional model is gradually increased in the direction facing away from the upper limb movement. Two models in the synchronized state overlap or almost overlap, whereas two models in the incompletely synchronized state have a certain time difference and do not completely overlap. The incompletely synchronized state is distinct from action inconsistency: both consistent and inconsistent actions can occur in either the synchronized or the incompletely synchronized state.
When the patient training data is monitored not to conform to the standard motion, a first region is delineated based on the upper limb action deviation between the second and third virtual three-dimensional models, a second region is delineated based on the first region and the orientation of the upper limb movement, a third region is delineated based on the first and second regions, and the sharpness corresponding to the first through third regions decreases in sequence.
At least one of the first to third regions may be defined in an irregular shape.
The first region is delineated based on the upper limb action deviation between the second and third virtual three-dimensional models. From this deviation, the motion path/task action path that the patient needs to adjust can be obtained; the motion path can be completed by instructing at least one upper limb joint point of the patient to move, and the area where the at least one upper limb joint point corresponding to the motion path is located is delineated as the first region. The mutually corresponding upper limb joint points of the second and third virtual three-dimensional models are both retained in the first region, so that the patient can directly identify the joint points that need adjustment by observing the first region. An upper limb joint point may be, for example, at the wrist joint or at the elbow joint.
When the upper limb action deviation corresponds to at least two motion paths, for example when both the wrist joint and the elbow joint need to be adjusted to meet the standard motion of the forearm, the first region is preferably delineated first according to the elbow joint and then re-delineated according to the wrist joint; this order better matches natural human movement habits.
A first predetermined shape is placed over the area where the motion path is located, and its outer edge is expanded outward or contracted inward so that the first region further contains the mutually corresponding upper limb joint points of the two virtual three-dimensional models; the first region is then determined from the expanded outer edge, so that all required content is included while unnecessary other picture content is reduced.
The first predetermined shape may be a preset shape, for example a regular circle, or it may be an irregular shape adjusted by counting and analyzing the shape of the first region delineated each time, so as to match the division of the first region more closely. The first predetermined shape may also be a shape selected from a number of preset shapes according to the degree of motion deviation of the upper limb.
The second region is delineated based on the first region and the orientation of the upper limb movement. To ensure that the patient can continue performing rehabilitation actions while adjusting posture, the outer edge of the first region serves as the second predetermined shape of the second region, the outer edge of the second predetermined shape is expanded outward so that the second region also contains the area of the upper arm or forearm corresponding to the orientation of the upper limb movement, and the second region is determined from the expanded outer edge.
The outer edge of the second region serves as the third predetermined shape of the third region, and the outer edge of the third predetermined shape is expanded outward to delineate the third region. The extent of the third region is not particularly limited.
The sharpness corresponding to the first through third regions decreases in sequence. The reduction in sharpness may be achieved by increasing the blur of the corresponding region.
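Taken together, the delineation of the three regions with decreasing sharpness might be sketched as follows, using circles as the predetermined shapes and an integer blur level per region; the shapes, margins, and distances are illustrative assumptions:

```python
import math

def bounding_circle(points, margin=20.0):
    """Center and radius of a circle containing all points plus a margin
    (the "first predetermined shape" expanded to contain the joint points)."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    r = max(math.hypot(p[0] - cx, p[1] - cy) for p in points) + margin
    return (cx, cy), r

def delineate_regions(joint_pts, motion_dir, arm_reach=150.0):
    """Return the three nested regions with increasing blur.

    joint_pts: mutually corresponding upper-limb joint points of the
    second and third models; motion_dir: unit vector of the upper limb
    movement orientation. Blur increases (sharpness decreases) from
    region 1 to region 3.
    """
    (cx, cy), r1 = bounding_circle(joint_pts)
    # Region 2: region 1 pushed outward along the movement orientation
    # to also cover the upper/lower arm.
    c2 = (cx + motion_dir[0] * arm_reach / 2, cy + motion_dir[1] * arm_reach / 2)
    r2 = r1 + arm_reach / 2
    # Region 3: region 2 expanded outward again; its extent is not
    # particularly limited.
    r3 = r2 + arm_reach / 2
    return [
        {"center": (cx, cy), "radius": r1, "blur": 0},
        {"center": c2,       "radius": r2, "blur": 1},
        {"center": c2,       "radius": r3, "blur": 2},
    ]
```

The blur level would then drive, for example, a Gaussian-blur pass of increasing kernel size on each region of the rendered frame.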
The training data differing from the standard motion may refer to the case where the current patient's training data deviates from the standard motion corresponding to the patient's current position. It may also refer to the case where the patient's training data at a certain moment in the virtual training environment differs from the action being executed by the second virtual three-dimensional model. Based on this, the system avoids the situation in which a patient who moves relatively slowly and fails to keep up with the virtual object is frequently reminded that the action is wrong; the patient can autonomously regulate the movement speed according to his or her own stamina, which helps improve the experience.
The second virtual three-dimensional model is executed at a preset speed such that it always remains in the same motion phase as the first virtual three-dimensional model. The action speed of the second virtual three-dimensional model is often faster than that of the patient, so the standard actions to be executed can be effectively demonstrated to the user; keeping the two models in the same motion phase limits the action time difference between them, so the patient can better follow the exercise action by action.
In the prior art, a patient is often prompted about an entire continuous motion; for example, after completing a paddle pulling period, the patient is told only that the completed motion does not meet the standard or that the motion amplitude is insufficient. The patient cannot tell which specific action in the paddle pulling period is problematic and can only perform the next period according to his or her own understanding of the prompt, which is not conducive to rehabilitation. In contrast, with the transparency-change overlap setting adopted by the rehabilitation system provided in the present application, the motion prompt can be narrowed to a single action within the paddle pulling period, such as the paddle lifting and water entering stage, the paddle pulling stage, the paddle pressing and pushing starting stage, or the paddle pushing stage, so that the patient clearly knows which of his or her actions differ; the difference between the patient's action and the standard action is presented visually to the patient through the transparency-change overlap, allowing the patient to adjust the action in a quantifiable manner.
The virtual reality platform may manipulate the transparency-change overlap based on non-contact interaction between the patient and the virtual training environment. Non-contact interactive operation refers to a process in which the patient does not touch the display screen but has the corresponding motions he or she makes, following the picture on the screen, virtually mapped by means of the sensors carried on the robot body 6. A non-contact interactive operation may correspond to a complete course of motion, such as a single paddle pulling period, rather than a single action.
If the virtual reality platform is started in the first guidance mode, when the second action guidance condition is triggered, at least a second and a third virtual three-dimensional model are built according to the training data, and the relative spatial position of the first virtual three-dimensional model in the virtual training environment is switched at the same time, so that the second and third virtual three-dimensional models can be located at relative spatial positions observable by the patient in the virtual training environment.
Switching the relative spatial position of the first virtual three-dimensional model in the virtual training environment may mean the following: in the first guidance mode, the first virtual three-dimensional model corresponding to the patient is at the bow position; after the switch, it is moved to a position other than the bow, namely the middle of the ship or the stern. The vacated relative spatial position in the virtual training environment is then used for building the second and third virtual three-dimensional models. In some upper limb rehabilitation systems provided by the prior art, a standard motion demonstration video is inserted directly into the current virtual training environment; the patient can only pause to watch the video, which interrupts rehabilitation and directly affects the patient's experience. The patient can only imitate the standard motion by sense and cannot determine whether his or her motion meets the requirements; moreover, interrupting training repeatedly to insert the demonstration video seriously affects rehabilitation training.
If the virtual reality platform is started in the second guiding mode, when a second action guiding condition is triggered, the relative spatial position of the first virtual three-dimensional model in the virtual training environment is maintained, and a newly constructed third virtual three-dimensional model is introduced into the virtual training environment at least according to the training data.
Maintaining the relative spatial position of the first virtual three-dimensional model in the virtual training environment may mean the following: in the second guidance mode, the first virtual three-dimensional model corresponding to the patient is located in the middle of the ship or at the stern; its current position does not need to change, and the newly built third virtual three-dimensional model is introduced directly. Because the third virtual three-dimensional model is formed by copy projection of the first virtual three-dimensional model, the model itself can be obtained without heavy data processing, which avoids stuttering or lag in the picture caused by excessive computational burden; meanwhile, the patient can observe his or her own motion from the first-person and third-person perspectives simultaneously, which helps improve the rehabilitation training effect.
While a patient uses the rehabilitation system, the virtual reality platform monitors the patient's motion. When the number of times the motion deviation degree of the patient's motion reaches a second preset deviation degree threshold exceeds a second preset count threshold, a third action guidance condition is triggered, and the patient is guided to adjust the action by overlapping the two virtual three-dimensional models with transparency changes combined with introducing a following positioning point into the virtual training environment. Overlapping the two models with transparency changes and introducing the following positioning points together let the patient see more clearly both which action needs adjustment and how to adjust it, further improving the rehabilitation training effect.
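The three action guidance conditions can be summarized as a small monitor; all thresholds below are illustrative assumptions, not values fixed by the application:

```python
class GuidanceMonitor:
    """Sketch of the three action guidance triggers.

    DEV1/DEV2: first/second preset deviation degree thresholds (assumed).
    COUNT1/COUNT2: first/second preset count thresholds (assumed).
    """
    DEV1, DEV2 = 0.25, 0.45
    COUNT1, COUNT2 = 3, 5

    def __init__(self):
        self.mismatches = 0   # actions reaching the first deviation threshold
        self.severe = 0       # actions reaching the second deviation threshold

    def observe(self, deviation):
        """Return the guidance conditions triggered by one monitored action."""
        events = []
        if deviation >= self.DEV1:
            self.mismatches += 1
            events.append("first")        # introduce a following positioning point
            if self.mismatches >= self.COUNT1:
                events.append("second")   # transparency-change overlap
        if deviation >= self.DEV2:
            self.severe += 1
            if self.severe > self.COUNT2:
                events.append("third")    # overlap combined with positioning point
        return events
```

The monitor escalates guidance as deviations accumulate, mirroring the progression from the first to the third condition described above.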
If the virtual reality platform is started in the first guidance mode, when the third action guidance condition is triggered, a second virtual three-dimensional model is built at least according to the training data, the relative spatial viewing angle of the first virtual three-dimensional model in the virtual training environment is switched at the same time, and the patient is guided to adjust the action by overlapping the first and second virtual three-dimensional models with transparency changes combined with introducing a following positioning point into the virtual training environment. The relative spatial viewing angle may refer to the first-person or third-person perspective observed by the patient; here, switching it may mean switching the patient's first-person perspective to the third-person perspective by rotating the view of the first virtual three-dimensional model in the virtual training environment. For example, the hull in the virtual training environment and the three-dimensional models of the people on board are turned to the side, preferably the side corresponding to the patient's affected upper limb, so that the motion of the affected limb can be observed better. When the third action guidance condition is triggered, the first-person perspective is cancelled and only the third-person perspective is retained in the virtual training environment, so the patient can visually observe the difference between the affected upper limb and the standard action; combining the following positioning points further improves how closely the patient's motion trajectory matches the standard.
The following positioning points include the highest and lowest points that a single-direction movement should reach, as well as several positioning points along the movement trajectory of the single-direction movement, so that the patient can control the effective bending angle or effective stretching angle of the upper limb during movement, achieving a better rehabilitation training effect.
If the virtual reality platform is started in the second guiding mode, when a third action guiding condition is triggered, the relative space view angle of the first virtual three-dimensional model in the virtual training environment is switched, and the patient is guided to adjust the action by performing transparency change overlapping on the first virtual three-dimensional model and the second virtual three-dimensional model and introducing a following positioning point into the virtual training environment in a combined manner. The first and second virtual three-dimensional models are already constructed in the second guidance mode, and based on this, when the third action guidance condition is triggered, the first and second virtual three-dimensional models that have been constructed can be overlapped in transparency change.
The robot body 6 is used for assisting the arm of the patient to carry out rehabilitation training and collecting training data in the training process. The training data mainly refers to data corresponding to the motor 301 and the sensors included in the robot body 6 during the rehabilitation training process. Preferably, the robot body 6 may be an existing upper limb rehabilitation device. The upper limb rehabilitation device may for example be an indoor rowing machine.
As a preferred embodiment, the robot body 6 may be a robot based on motion mapping and virtual reality, whose trajectory curve is fitted from the motion of the upper limb. Through the cooperation of different control strategies for the motor 301 with the linkage structure provided in the present application, the robot body 6 can assist the patient's upper limb in moving, completing rehabilitation training that combines active and passive modes.
The robot body 6 mainly includes a motor mounting block 3 and a linkage 4.
The linkage 4 is adjustably and fixedly arranged on the mounting table 5 by means of at least one support rod 1. Each support rod 1 is mounted upright on the mounting table 5, and the linkage 4 is slidably connected to the body of the support rod 1. The operator can adjust the height of the linkage 4 on the mounting table 5 up and down to better accommodate different patients.
The linkage 4 includes an AB lever 403, a BC rod 402, and a CD rod 404 rotatably connected to one another in sequence. An arm rest 401 is fixedly assembled on the BC rod 402 for placing the forearm of the patient's upper limb. The shape of the arm rest 401 is adapted to that of the human forearm, resembling an elongated U-shaped structure.
The axis about which the AB lever 403 rotates relative to the BC rod 402 is coplanar with the axis about which the CD rod 404 rotates relative to the BC rod 402, whereby the robot can support multi-pose motion of the patient's forearm in the plane of the BC rod 402.
A motor mounting block 3 is provided on the AB lever 403 at the end corresponding to fulcrum A. The output end of the motor 301 in the motor mounting block 3 is connected to the end of the AB lever 403 corresponding to fulcrum A. That is, rotation of the AB lever 403 can be controlled by controlling the motor 301, and owing to the linkage relationship among the AB lever 403, the BC rod 402, and the CD rod 404, the movement of the patient's forearm on the BC rod 402 can be driven synchronously.
The motor mounting block 3 is stably assembled above the mounting table 5 through the multi-axis arm set 14. One end of the multi-axis arm set 14 is slidably connected with the support rod 1 to realize height adjustment of the linkage 4 on the mounting table 5. The other end of the multi-axis arm set 14 is connected with a motor supporting plate 303 in the motor mounting block 3 to stably support the motor mounting block 3.
The multi-axis arm set 14 comprises at least one rotating shaft and at least one shaft arm 15 rotatably connected to each other in sequence. Two adjacent shaft arms 15 are rotatably connected through a rotating shaft arranged at the end of the rod body. A rotating shaft connecting block is slidably arranged on the support rod 1, another rotating shaft connecting block is arranged below the motor supporting plate 303, and the two shaft arms 15 at the two ends of the multi-axis arm set 14 are each rotatably connected with a rotating shaft connecting block. The axes about which adjacent shaft arms 15 rotate relative to each other are parallel.
Two support rods 1 are arranged on the mounting table 5, and the motor mounting block 3 (with the AB rod 403) and the CD rod 404 are each connected to one support rod 1 through a multi-axis arm set 14, so that the support rods 1 adjustably support the linkage 4.
The weight of the motor 301 in the motor mounting block 3 is mainly transferred to the support rod 1 through the multi-axis arm set 14, so the movement of the patient's upper limb is not affected by the weight of the motor 301; the motion of the upper limb can thus be evaluated more accurately, guaranteeing the rehabilitation training effect.
A bearing 406 with a flange and a locking screw is adopted at the D pivot of the CD rod 404 to connect the inner shaft and the outer shaft, so that the transmission is simple and the structure is stable.
Only one motor mounting block 3 is arranged in the robot body 6; fitting of the upper limb motion trajectory can be accurately completed by running a single motor 301, which makes the robot simple and convenient to operate, low in cost, and suitable for popularization.
A specific mounting manner at the motor mounting block 3 is explained as follows. The motor supporting plate 303 is mounted on the positioning seat through at least one positioning hole, for example 8 positioning holes. The positioning seat is arranged on one end of the multi-axis arm set 14. The motor 301 is positioned through at least one through hole, for example 4 through holes, on the motor connecting plate 302, and is then placed on the motor supporting plate 303. Finally, the motor supporting plate 303 is connected with the motor connecting plate 302 through at least one long hole, for example 2 long holes, completing the mounting and positioning of the motor 301.
A specific mounting manner at the linkage 4 is as follows. The protruding shaft at the B-end of the AB rod 403 extends into the inner ring of a bearing 406 with flange and locking screws and is fastened by two locking screws. The outer-ring flange of the same bearing 406 is connected to the B-end of the BC rod 402 by at least one bolt-and-nut set, for example 3 sets. The C-end of the BC rod 402 is connected to the outer-ring flange of another bearing 406 in the same manner, and the inner ring of that bearing 406 is fixed by a locking screw to the protruding shaft at the C-end of the CD rod 404. The D-end of the CD rod 404 is flange-connected to the outer ring of a bearing 406 with flange and locking screw, whose inner ring is fixed to the protruding shaft on the linkage connecting plate 405. The arm support 401 is mounted on the BC rod 402 by at least one bolt-and-nut set, for example 2 sets. Two sections of elastic band can be sewn at the two groups of square holes on the arm support 401 for auxiliary fixation.
The linkage 4 and the motor mounting block 3 are joined by a keyed connection between the output shaft of the motor 301 and the hole at the A-end of the AB rod 403. The linkage 4 is connected to the supporting rod 1 through at least one mounting positioning hole, for example 8 mounting positioning holes, in the linkage connecting plate 405.
In use, the motor 301 is driven by one 48 V power supply, one pulse controller, and one driver. Through the keyed connection, the output shaft of the motor 301 transmits motion to the AB rod 403, which acts as a crank and completes a full circular motion. The BC rod 402 and the CD rod 404 are thereby driven to move synchronously along a defined track, and the trajectory traced at the arm support 401 is the fitted trajectory of the desired upper-limb movement.
As a preferred embodiment, the main structural and mounting dimensions of the robot body 6 proposed in the present application are as follows. The center distance between the two ends of the AB rod 403 is 129.60 mm; between the two ends of the BC rod 402, 187.00 mm; and between the two ends of the CD rod 404, 313.80 mm. The arm support 401 has a length of 150 mm, a major diameter of 90 mm, and a minor diameter of 70 mm. The inner ring of the bearing 406 with flange and locking screw has a diameter of 22 mm, and the outer-ring flange has a connection diameter of 60 mm. The midpoint of the A-end of the AB rod 403 is located 120.32 mm vertically and 290.85 mm horizontally from the center of the D-end of the CD rod 404. The arm support 401 is connected to the BC rod 402 at a horizontal inclination of 45.5°. The midpoint of the D-end of the CD rod 404 sits 165 mm above the tabletop of the mounting table 5.
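These dimensions describe a planar crank-rocker four-bar mechanism, so the trajectory traced at the arm support can be previewed numerically. The following is a minimal sketch of the position analysis, assuming pivot D at the origin, the stated A-end offsets as the ground link, and an illustrative coupler point halfway along BC (the exact attachment point of the arm support 401 is not specified above):

```python
import math

# Link lengths (mm), taken from the preferred embodiment above
AB, BC, CD = 129.60, 187.00, 313.80
# Fixed pivots: D at the origin; A offset by the stated mounting distances
D = (0.0, 0.0)
A = (290.85, 120.32)  # horizontal / vertical offsets of the A-end from the D-end

def coupler_point(theta, frac=0.5):
    """Position of a point on coupler BC for crank angle theta (rad).

    Returns None if the linkage cannot close at this crank angle.
    frac selects where along BC the point sits (0 = B, 1 = C); the
    arm-support location is an assumption, not taken from the patent.
    """
    bx, by = A[0] + AB * math.cos(theta), A[1] + AB * math.sin(theta)
    # Intersect the circle of radius BC about B with the circle of radius CD about D
    dx, dy = D[0] - bx, D[1] - by
    d = math.hypot(dx, dy)
    if d > BC + CD or d < abs(BC - CD):
        return None  # linkage cannot assemble at this crank angle
    a = (BC**2 - CD**2 + d**2) / (2 * d)
    h = math.sqrt(max(BC**2 - a**2, 0.0))
    mx, my = bx + a * dx / d, by + a * dy / d
    cx, cy = mx - h * dy / d, my + h * dx / d  # one of the two assembly branches
    return (bx + frac * (cx - bx), by + frac * (cy - by))

# Sample a full crank revolution to trace the fitted trajectory
trajectory = [coupler_point(2 * math.pi * i / 360) for i in range(360)]
```

With these lengths the Grashof condition holds and AB can rotate fully as a crank, so the sampled trajectory closes over the whole revolution.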
The active and passive control platform is used to send control instructions to the robot body 6 and to record training data. It mainly comprises a driving system 7, a single-chip microcomputer 9, a sensor group 8, an encoder 11, an upper computer 10, and a display group 2.
The active and passive control platform provides at least two modes, including at least a passive rehabilitation mode and an active rehabilitation mode. In the passive rehabilitation mode, according to a preset passive rehabilitation scheme and the angular velocity values set in it, the controller sends pulse instructions for the motor 301 to the driver; the driver outputs the set pulses to drive the stepping motor 301 at a given angular velocity, which in turn drives the linkage 4. The patient's arm, placed in the arm support 401, moves with the linkage 4 to complete the rehabilitation process. The passive rehabilitation scheme may be understood as a preset correspondence between angular velocity and time.
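As a rough illustration of how such a scheme can be turned into driver pulses, the sketch below converts an angular-velocity-versus-time table into pulse frequencies for a stepper driver. The step angle and microstepping setting are assumptions for illustration; the patent does not specify them:

```python
# Assumed stepper resolution: 1.8° full steps with 16x microstepping (not from the patent)
STEPS_PER_REV = 200 * 16
DEG_PER_STEP = 360.0 / STEPS_PER_REV

def pulse_frequencies(scheme):
    """Map a passive scheme [(duration_s, deg_per_s), ...] to
    (duration_s, pulses_per_s) pairs to send to the driver."""
    return [(t, w / DEG_PER_STEP) for t, w in scheme]

# Example scheme: slow warm-up, steady training phase, slow cool-down
scheme = [(5.0, 10.0), (20.0, 30.0), (5.0, 10.0)]
rates = pulse_frequencies(scheme)
```

A real pulse controller would additionally ramp between segments to limit acceleration on the patient's arm.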
The sensor group 8 may include at least one of an angle sensor, a force sensor, and a speed sensor arranged on the arm support 401 of the robot body 6. The sensor group 8 can detect the pressure between the patient's upper limb and the arm support 401, the movement track of the upper limb, its movement speed, and so on, and transmits the data to the single-chip microcomputer 9. The single-chip microcomputer 9 converts the readings into pressure values and transmits them to the upper computer 10 through a USB serial port; the upper computer 10 displays the pressure, upper-limb rotation speed, and other data on the medical interface in real time.
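A minimal sketch of this conversion step on the single-chip microcomputer side is shown below; the ADC resolution, reference voltage, and sensor sensitivity are illustrative assumptions, not values from the patent:

```python
def adc_to_pressure(raw, adc_bits=10, vref=5.0, sensitivity_v_per_n=0.02):
    """Convert a raw ADC reading from the force sensor into a pressure value (N).

    All parameters here are hypothetical; a real implementation would use the
    calibration data of the actual sensor fitted to the arm support.
    """
    volts = raw * vref / (2 ** adc_bits - 1)
    return volts / sensitivity_v_per_n

# Format one reading as a text line for the USB serial link to the upper computer
line = "P={:.1f}".format(adc_to_pressure(512))
```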
The display group 2 may include a three-dimensional scene display group 13 and a medical interface display group 12. The virtual reality platform is connected to the three-dimensional scene display group 13 and the medical interface display group 12 respectively. The display group 2 can be mounted on the mounting table 5 in a stable and adjustable manner by means of the supporting rod 1 arranged on the mounting table 5 and at least one multi-axis arm set 14. The three-dimensional scene display group 13 displays the selected virtual scene for the patient using the arm support 401 to watch during upper-limb rehabilitation, guiding the patient to perform standard and effective rehabilitation training. The medical interface display group 12 displays the data collected by the sensors arranged on the linkage 4, together with the results calculated and analyzed from them, so that medical staff can clearly determine the patient's rehabilitation training condition.
As a preferred embodiment, the three-dimensional scene display group 13 may be a display arranged facing the patient and capable of showing a three-dimensional scene picture, on which the virtual reality platform presents a three-dimensional scene that changes with the patient's movement. Preferably, the three-dimensional scene display group 13 may be a head-mounted display device employing VR, AR, or MR technology. VR (Virtual Reality) technology uses a VR head-mounted display device to shield the wearer's vision and hearing from the outside world and guide the patient to feel present in a virtual environment. A VR head-mounted display device is more immersive than a conventional display and produces a better scene-simulation effect. Besides the purely virtual digital picture of VR, the virtual-digital-picture-plus-naked-eye-reality technology of AR (Augmented Reality) or the digitized-reality-plus-virtual-digital-picture technology of MR (Mixed Reality) may also be adopted.
In the active rehabilitation mode, the patient's arm moves actively, and the upper limb drives the linkage 4 associated with the arm support 401. The pressure sensor and the encoder 11 detect training data such as the pressure exerted by the upper limb and the angular velocity at which the upper limb drives the linkage 4. The angular velocity of the upper-limb rotation is obtained by the single-chip microcomputer 9 converting the pulse counts accumulated by the encoder 11 per unit time, and is displayed on the medical interface. The involvement of active movement strengthens the central nervous system and improves the patient's rehabilitation outcome.
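The count-to-angular-velocity conversion described above can be sketched as follows; the encoder resolution and decoding factor are assumptions, since the patent does not state them:

```python
# Hypothetical encoder parameters (not specified in the patent)
ENCODER_PPR = 1000   # pulses per revolution on one channel
QUAD_FACTOR = 4      # 4x decoding of the A/B quadrature channels

def angular_velocity_deg_s(counts, dt_s):
    """Angular velocity in deg/s from quadrature counts accumulated over dt_s seconds."""
    revolutions = counts / (ENCODER_PPR * QUAD_FACTOR)
    return revolutions * 360.0 / dt_s

# e.g. 500 counts in 0.25 s -> 0.125 rev in 0.25 s -> 180 deg/s
```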
It should be noted that the above-described embodiments are exemplary; those skilled in the art, having the benefit of the present disclosure, may devise various arrangements that, although not explicitly described herein, embody the principles of the invention and fall within its scope. It should be understood that the specification and figures are illustrative only and do not limit the claims; the scope of the invention is defined by the claims and their equivalents. The present description contains several inventive concepts, marked for instance by "preferably", "according to a preferred embodiment", or "optionally", each indicating that the respective paragraph discloses an independent concept; the applicant reserves the right to file divisional applications according to each inventive concept.

Claims (10)

1. An immersive upper limb rehabilitation system comprising at least:
the robot body is used for assisting the arm of the patient to perform rehabilitation training and collecting training data in the training process;
the active and passive control platform is used for sending a control instruction to the robot body and recording training data;
a virtual reality platform for displaying a virtual training environment and providing rehabilitation training visual feedback for a patient,
wherein the virtual reality platform is configured to:
when entering an active training rehabilitation mode, displaying the constructed virtual training environment to the patient;
by synchronously or asynchronously building a first virtual three-dimensional model corresponding to a part of the real object and a second virtual three-dimensional model of the virtual object corresponding to the current virtual training environment,
and/or
By means of the at least one virtual three-dimensional model replicating projections in the current virtual training environment under non-contact interaction between the patient and the virtual training environment,
guiding the patient to adjust the motion.
2. The system of claim 1, wherein the virtual reality platform monitors the patient's motion during use of the rehabilitation system and, when at least one action guidance condition is triggered, guides the patient to adjust the action by introducing into the virtual training environment an overlay of following positioning points and/or transparency changes.
3. The system of claim 2, wherein the virtual reality platform is further configured to:
triggering a first action guiding condition when the action deviation degree of the movement of the patient is monitored to reach a first preset deviation degree threshold value;
guiding the patient to adjust the action by introducing a following positioning point into the virtual training environment.
4. The system of claim 3, wherein the virtual reality platform is further configured to:
triggering a second action guiding condition when the number of times that the action deviation degree of the movement of the patient reaches a first preset deviation degree threshold value reaches a first preset number threshold value is monitored;
guiding the patient to adjust the action by overlapping transparency changes of two virtual three-dimensional models.
5. The system of claim 4, wherein the two virtual three-dimensional models are a second virtual three-dimensional model and a third virtual three-dimensional model, and the second virtual three-dimensional model is a virtual object performing a standard action corresponding to the current virtual training environment.
6. The system of claim 5, wherein the third virtual three-dimensional model is obtained by performing a replica projection of the first virtual three-dimensional model, and the third virtual three-dimensional model has a different relative spatial position in the virtual training environment than the first virtual three-dimensional model.
7. The system of claim 6, wherein, with the two virtual three-dimensional models overlaid with varying transparency, the patient can view them both as a motion picture from a first-person perspective and as a motion picture from a third-person perspective.
8. The system of claim 7, wherein the virtual reality platform is further configured to:
in the case that the first guidance mode is selected, when a second action guidance condition is triggered, the second and third virtual three-dimensional models are constructed from at least the training data, and the relative spatial positions of the first virtual three-dimensional model in the virtual training environment are simultaneously switched so that the second and third virtual three-dimensional models can be in relative spatial positions observable by the patient in the virtual training environment.
9. The system of claim 8, wherein the virtual reality platform is further configured to:
in the case that the second guidance mode is selected, when a second action guidance condition is triggered, the relative spatial position of the first virtual three-dimensional model in the virtual training environment is maintained, and a newly constructed third virtual three-dimensional model is introduced to the virtual training environment at least in accordance with the training data.
10. The system of any one of claims 1 to 9, wherein the virtual reality platform is further configured to:
triggering a third action guidance condition when the number of times the action deviation degree of the patient's movement reaches a second preset deviation degree threshold exceeds a second preset number threshold, and guiding the patient to adjust the action by combining transparency-change overlapping of the two virtual three-dimensional models with the introduction of a following positioning point into the virtual training environment.
CN202110375325.8A 2021-04-06 2021-04-06 Immersive upper limb rehabilitation system Active CN113101612B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110375325.8A CN113101612B (en) 2021-04-06 2021-04-06 Immersive upper limb rehabilitation system


Publications (2)

Publication Number Publication Date
CN113101612A true CN113101612A (en) 2021-07-13
CN113101612B CN113101612B (en) 2023-01-10

Family

ID=76714513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110375325.8A Active CN113101612B (en) 2021-04-06 2021-04-06 Immersive upper limb rehabilitation system

Country Status (1)

Country Link
CN (1) CN113101612B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113713333A (en) * 2021-08-25 2021-11-30 西安交通大学 Dynamic virtual induction method and system for lower limb rehabilitation full training process

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104317196A (en) * 2014-09-29 2015-01-28 华南理工大学 Virtual reality-based upper limb rehabilitation training robot control method
CN106110627A (en) * 2016-06-20 2016-11-16 曲大方 Physical culture and Wushu action correction equipment and method
CN106358024A (en) * 2016-11-03 2017-01-25 京东方科技集团股份有限公司 Stroke monitoring system and stroke monitoring method
CN106527689A (en) * 2016-10-13 2017-03-22 广州视源电子科技股份有限公司 User interface interaction method and system for virtual reality system
CN107203745A (en) * 2017-05-11 2017-09-26 天津大学 A kind of across visual angle action identification method based on cross-domain study
KR20200022078A (en) * 2018-08-22 2020-03-03 장준영 Virtual Real Medical Training System
CN111760261A (en) * 2020-07-23 2020-10-13 重庆邮电大学 Sports optimization training system and method based on virtual reality technology
WO2021059669A1 (en) * 2019-09-25 2021-04-01 ソニー株式会社 Information processing device, video generation method, and program



Also Published As

Publication number Publication date
CN113101612B (en) 2023-01-10

Similar Documents

Publication Publication Date Title
CN113101137B (en) Upper limb rehabilitation robot based on motion mapping and virtual reality
US10417931B2 (en) Rehabilitation assistance device and program for controlling rehabilitation assistance device
House et al. The VoiceBot: a voice controlled robot arm
US8257284B2 (en) Training device for muscle activation patterns
KR20220116237A (en) smart treadmill
Ruffaldi et al. Vibrotactile perception assessment for a rowing training system
CN113101612B (en) Immersive upper limb rehabilitation system
JP3570208B2 (en) Exercise analyzer and exercise assist device
JP2002127058A (en) Training robot, training robot system and training robot control method
JPH11198075A (en) Behavior support system
CN109882702A (en) A kind of intelligent follow-up adjusting display bracket
CN102551999A (en) Eye muscle exercise device and method
JPH11513157A (en) Interactive navigation device for virtual environment
Chen et al. Application of wearable device HTC VIVE in upper limb rehabilitation training
RU2646324C2 (en) Method of diving into virtual reality, suspension and exo-skelet, applied for its implementation
CN113679568A (en) Robot-assisted multi-mode mirror image rehabilitation training scoring system for upper limbs of stroke patients
JP2003199799A (en) Limb driving device for recovering function of walking
CN114767464B (en) Multi-mode hand rehabilitation system and method based on monocular vision guidance
CN207506748U (en) For the equipment of the autonomous rehabilitation training of upper limb unilateral side hemiplegic patient
JP3190026B1 (en) Humanoid robot experience presentation device and master-slave control device
El Makssoud et al. Dynamic control of a moving platform using the CAREN system to optimize walking in virtual reality environments
JPH08141026A (en) Walk training device
Chang et al. Bio-inspired gaze-driven robotic neck brace
US20190184574A1 (en) Systems and methods for automated rehabilitation
CN112827153A (en) Active self-adaptive system for human body function training and control method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhao Ping, Ge Zhaojie, Zhang Yating, Deng Xueting, Guan Haiwei, Ge Qiaode, Zhang Ru, Cai Mei, Wang Zhaowei

Inventor before: Zhao Ping, Ge Zhaojie, Zhang Yating, Deng Xueting, Guan Haiwei, Ge Qiaode

GR01 Patent grant