CN111388968B - Gait balance training device and system based on virtual reality - Google Patents
- Publication number
- CN111388968B (application CN202010248053.0A)
- Authority
- CN
- China
- Prior art keywords
- training
- module
- balance
- real
- scene
- Prior art date
- Legal status
- Active
Classifications
- A63B26/003: Exercising apparatus for improving balance or equilibrium (A: Human necessities; A63B: Apparatus for physical training, gymnastics, swimming, climbing, or fencing; ball games; training equipment)
- A63B71/06: Indicating or scoring devices for games or players, or for other sports activities
Abstract
The invention relates to a virtual-reality-based gait balance training system comprising at least an intelligent eyeshade for displaying images. The intelligent eyeshade comprises at least a balance module, a display module and an image processing module. The image processing module combines a training guide pattern matched with the training mode with the captured real-time scene to form a real-time training scene and sends it to the display module. While the user performs walking training according to the set training rules based on the real-time training scene displayed by the display module, the balance module records time-dependent balance data and, upon receiving a display instruction, sends the time-dependent balance data to the display module. By combining the training guide pattern with the real scene in which training takes place, the invention allows the user to see the real world and thus avoid dangers in the real scene. The invention does not need to collect the user's gait or action information, which reduces the number of sensing devices required and the complexity of using the training device.
Description
Technical Field
The invention relates to the technical field of medical training, in particular to a gait balance training device and system based on virtual reality.
Background
The incidence of falls among the elderly in China is very high; falls, which are closely related to unstable gait, are a major cause of accidental-injury deaths among the elderly, and gait balance training is an effective way to reduce falls. The prior art includes many gait training methods that are easy to learn and effective; the grid training method is readily accepted in clinical use, but it is limited by the training site and requires grids to be drawn on the ground so that patients can step on the corresponding grids as instructed by medical staff. Balance training takes a long time and must proceed step by step, generally about three months or even longer; inpatients cannot stay in hospital that long, so patients need to train at home, where there is no professional guidance or supervision and it is difficult for them to persist. Therefore, an electronic virtual gait balance training instrument is needed that is convenient for training, not limited by the training site, and able to record the time of each training session, the gait training mode and the body sway amplitude, thereby encouraging and supervising the patient.
For example, Chinese patent CN 109731292 A discloses a balance ability testing and training system and method based on virtual reality technology. The system includes: a test customizing module for customizing a test scheme; a training customization module for generating a training scheme according to a balance ability report; a virtual scene module for presenting a simulated virtual training scene according to the test scheme and creating a virtual training scene according to the training scheme for balance training; and a posture capturing module for acquiring motion data and barycentric coordinates in the simulated virtual training scene, outputting them to a test evaluation module to form the balance ability report, and performing posture detection and barycentric coordinate acquisition in the virtual training scene. The drawback of this patent is that motion data and barycentric coordinates need to be captured, making the data analysis complex and error-prone. Moreover, the same motion varies between individuals; motion capture is complex and difficult, and individual differences introduce large errors into the final evaluation result.
Chinese patent CN108648797A discloses a portable rehabilitation training system and method, usable both at home and in hospital, for vestibular rehabilitation based on virtual reality equipment. The training system comprises a scale evaluation module, virtual reality equipment, a positioning sensor, a training self-evaluation module and a training storage module. The scale evaluation module provides a vertigo and balance disorder scale used to evaluate the disease level of the trainee; the virtual reality equipment provides a plurality of training videos for vestibular dysfunction rehabilitation, divided into training levels corresponding to the disease levels; the positioning sensor is connected to the virtual reality equipment and the training self-evaluation module, senses the rehabilitation training condition of the trainee's body parts, and transmits the sensing result to the training self-evaluation module, which completes the self-evaluation according to the received sensing result. This patent also suffers from complex data processing, and there is a risk of falling and injury after vertigo training, since no auxiliary personnel or equipment are present in the scene to avoid injury.
Chinese patent CN 110755083A discloses a rehabilitation training method, a motion evaluation device, a virtual reality device and a rehabilitation training system based on virtual reality. The method comprises the following steps: collecting motion signals of a user and sending them to an upper computer, which fuses the motion signals with a virtual reality scene to obtain a fused picture and analyzes the fused picture to obtain an analysis result; after the user finishes training, collecting gait data of the user and sending it to the upper computer, which calculates the user's balance ability index from the gait data. On the one hand this can reduce the user's risk of falling, and on the other hand the user's balance ability can be known in time. However, comparing and analyzing the gait data and the fused picture is error-prone. Moreover, given individual differences, using the gait condition itself to measure balance increases the calculation error of the balance index for users with extreme physical differences.
Another Chinese patent discloses a multifunctional balance evaluation and training system applied to the field of human balance ability training and evaluation. It comprises three functional modules (data acquisition, data processing and balance training) and structurally comprises a ground pressure-center test board, an attitude sensor, a handrail frame, a data processing device, an external information display device and VR glasses. The handrail frame prevents accidents such as the subject falling and protects personal safety. The ground pressure-center test board and the attitude sensor collect human balance data and transmit them to the data processing device via Wi-Fi communication; based on a human balance analysis algorithm, the balance ability evaluation result and parameters are displayed on the external information display device and the VR glasses. The system offers three working modes (evaluation, autonomous training and assisted training), provides the subject with virtual game scenes for balance testing and training, improves the interest of the evaluation and training process, and is simple in structure, easy to operate and convenient to popularize. However, the ground pressure-center test board of this system is large, inconvenient to carry and unsuitable for home use; its data analysis algorithm is complex, and the more complex the algorithm and the more calculation steps it requires, the larger the resulting error, which is also not conducive to individualized balance analysis.
As shown above, there are many products on the existing market that perform balance training through virtual techniques, and they all share the following problems. First, training is performed in a completely virtual reality scene, so the user is separated from the real scene in which he or she stands. The real scene can never be completely empty, so during balance training it is difficult for the user to avoid colliding with, or falling over, objects or walls in the real scene; the safety factor is low and independent training is difficult. Second, because the user cannot see the real scene, it is difficult to avoid people or objects moving into the training area, which creates a potential safety hazard; even with a dedicated attendant, the danger cannot be avoided completely. Third, the balance training devices in the prior art are all provided with gait information sensing devices or motion capture devices, which makes the information acquisition and data processing for balance training complicated. In other words, the prior art cannot evaluate the training result without gait information or action information, and is not simple enough.
Furthermore, owing on the one hand to differences in understanding among persons skilled in the art, and on the other hand to the fact that the inventors studied a large number of documents and patents when making the present invention but space does not permit listing all their details and contents, this by no means implies that the present invention lacks these prior art features; on the contrary, the present invention may be provided with all features of the prior art, and the applicant reserves the right to add related prior art to the background.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention provides a gait balance training method and system based on virtual reality. Addressing the prior-art defect that the virtual reality scene is divorced from the real training scene, the virtual-reality-based gait balance training system comprises at least an intelligent eyeshade for displaying images, characterized in that the intelligent eyeshade comprises at least a balance module, a display module and an image processing module. The image processing module combines a training guide pattern matched with the training mode with the captured real-time scene to form a real-time training scene and sends it to the display module. While the user performs walking training according to the set training rules based on the real-time training scene displayed by the display module, the balance module records time-dependent balance data and, upon receiving a display instruction, sends the time-dependent balance data to the display module. By combining the training guide pattern with the real scene of training, the invention allows the user to see the real world, thereby avoiding dangers in the real scene. The invention does not need to collect the user's gait or action information, thereby reducing the number of sensing devices and reducing the volume, weight and operating complexity of the training device.
Addressing the difficulty users face in selecting a training mode, the system further comprises at least one terminal connected to the intelligent eyeshade wirelessly, and the terminal is provided with a training module for selecting the training mode. The image processing module updates the training guide pattern based on the training mode information sent by the training module and forms the real-time training scene. With the intelligent eyeshade connected to a terminal, the training module assesses the rehabilitation level based on the time-dependent balance data sent by the balance module. Through the association of the intelligent eyeshade with the terminal, the training mode can be selected and set by the user at the terminal, or selected and set remotely or on site by medical personnel, so that the user performs balance training in a training mode appropriate to his or her current rehabilitation level.
Addressing the problem of erroneous actions during the user's balance training, in the case of walking training the image processing module determines the user's effective-action parameters based on the step positions acquired by the image acquisition module and a matching degree parameter with respect to the designated area set by the training rule. In the present invention, a step placed in the wrong area counts as an invalid action. By counting as effective only those steps placed on the correct area, the invention positively encourages the user to perform accurate balance training and facilitates a comprehensive evaluation of the user's rehabilitation result.
Addressing the low efficiency with which middle-aged and elderly people receive and understand training rules, the intelligent eyeshade further comprises a voice module. Once the training mode is determined, the voice module feeds back to the user, by voice, information related to the behavior result, based on the step position and the matching degree parameter of the designated area set by the training rule. The invention informs the user of the training rules, or explains them, by voice, so that the user quickly understands the training rules while observing the real-time training scene through the display module, which reduces the time medical staff spend introducing and explaining the training mode.
In the prior art, obtaining balance data requires calculating the coordination of upper-body, leg and step actions; the data acquisition is redundant, the calculation is complex and the result is poor. In the present invention the balance module is an electronic pendulum balancer, and during walking training the electronic pendulum balancer sends time-dependent swing amplitude parameters to the training module for storage and/or evaluation of the rehabilitation grade. The electronic pendulum balancer arranged in the intelligent eyeshade swings with a greater amplitude as the user's head sways more, so the user's sway is reflected intuitively in the data; this also helps ordinary users, especially elderly users, understand the balance data, and improves their cooperation during balance training.
In the prior art, balance training requires the user to perform specified upper-limb and lower-limb movements, and users generally complete them poorly during training, so it is difficult to obtain meaningful and accurate balance data. In the present invention the image processing module adaptively combines the corresponding training guide pattern with the real-time ground scene acquired by the image acquisition module, based on the training mode information sent by the training module, to form a real-time training scene corresponding to the training rule. Because the training guide pattern is arranged only on the ground, the difficulty of combining the virtual scene with the real scene is reduced, the user only pays attention to the step positions, and the upper limbs can move without restriction. During training the user naturally concentrates the visual field on the ground and on his or her own step positions, so gait information does not need to be acquired by a dedicated device; the step position information can be obtained from the view image alone, which simplifies the way the user's step position information is acquired.
Addressing the problems of complex virtual reality scene display and low user cooperation and completion, the training guide pattern is an ordered arrangement of geometric patterns carrying guiding factors. The guiding factors include at least voice information, shape features, color features, numerical features, character features and/or symbol features related to the geometric patterns. A geometric training guide pattern helps middle-aged and elderly people understand the training mode, thereby improving their understanding, completion and cooperation with the training behaviors. To allow for signal loss during training and to avoid losing rehabilitation results through data loss, the intelligent eyeshade further comprises at least one storage module. While the first wireless communication module of the intelligent eyeshade is disconnected from the second wireless communication module of the terminal, the storage module temporarily stores the time-dependent balance data; once the first wireless communication module is connected to the second wireless communication module, the storage module sends the stored time-dependent balance data and the corresponding training mode information to the training module of the terminal. By providing the storage module, the invention can store data temporarily and transmit it all after the communication connection is re-established, ensuring that the data is preserved and not lost.
The first wireless communication module of the intelligent eyeshade is connected to a cloud server, and the cloud server is connected to at least one terminal. The training mode information sent by the training module and the balance data sent by the intelligent eyeshade are stored on the cloud server, and a suggestion module in the cloud server matches a corresponding training suggestion scheme based on the training mode information and the corresponding balance data and feeds the scheme back to the terminal and/or the intelligent eyeshade. The cloud server allows medical personnel to carry out rehabilitation evaluation through a remote terminal according to the balance data. Further, in the current big-data era, the balance data of many training subjects can be stored in association with the corresponding rehabilitation levels and with the training suggestion schemes corresponding to those levels. Having the cloud server feed back an associated training suggestion scheme based on the balance data reduces the burden on medical staff, who only need to add further supplementary advice to the suggested scheme remotely.
The image processing module adjusts the display angle of the training guide pattern combined with the real-time scene based on the change in angle of the real-time scene acquired by the image acquisition module, so that the real-time training scene changes angle realistically with the user's viewing angle. A person's field of view changes continuously during training, so the training guide pattern must change together with the real scene in the field of view, which facilitates accurate, well-matched actions by the user.
Drawings
FIG. 1 is a schematic block diagram of the present invention;
fig. 2 is a schematic illustration of a training guide pattern of the present invention.
List of reference numerals
100: an intelligent eyeshade; 110: a balance module; 120: a display module; 130: an image processing module; 140: a first wireless communication module; 200: a terminal; 210: a display screen; 220: a training mode adjustment module; 230: a second wireless communication module.
Detailed Description
The following detailed description is made with reference to fig. 1 and 2 of the drawings.
Prior-art devices for balance training in a virtual reality environment require the user to wear an eyeshade and also to wear corresponding sensing devices on the arms, hands, legs, knees and feet that are combined with the virtual scene to present a virtual reality scene. However, the virtual reality scene is divorced from the real training scene; in particular, when the user trains at home, the virtual scene does not match the user's furnishings, and there may be no space large enough to complete the actions required in the virtual reality scene. Moreover, because the balance training is performed detached from the real scene, the user can hardly avoid objects or people around the training area, which carries a large safety risk, and arranging a dedicated attendant occupies considerable human resources. In other words, in the prior art the user cannot safely perform independent balance training through a virtual reality scene. In particular, the users who need balance training are patients or middle-aged and elderly people whose own balance is difficult to control, especially the elderly; therefore the training mode of the balance training, and the patients' understanding, completion and cooperation with it, are all important problems to be solved by the invention. The technical problem addressed by the invention is how to combine a virtual scene with a real scene to form a real-time training scene capable of guiding the user through balance training in the real scene, without using wearable sensing equipment to collect the user's posture or gait information.
The invention provides a device, or a rehabilitation training device and system, that combines the virtual with the real. The invention is not limited to rehabilitation training and can also be applied to the field of multiplayer games.
Example 1
As shown in fig. 1, the present invention provides a gait balance training method and system based on virtual reality. The virtual-reality-based gait balance training system comprises at least an intelligent eyeshade 100 and a terminal 200. The intelligent eyeshade 100 is connected to the terminal 200 wirelessly. The wireless mode may be, for example, a Bluetooth connection, a WiFi connection or another wireless connection, as long as it can transmit information wirelessly. The intelligent eyeshade 100 is in data connection with the second wireless communication module 230 of the terminal 200 through the first wireless communication module 140. The first wireless communication module 140 and the second wireless communication module 230 include, but are not limited to, a Bluetooth communicator, a WiFi communicator and the like. The intelligent eyeshade 100 and the terminal 200 are each provided with a corresponding power supply and switch and can be started and shut down independently. Preferably, the power supply is a rechargeable power supply or a battery supply with replaceable batteries. Preferably, the intelligent eyeshade of the invention can also carry out normal balance training without being connected to a terminal; for example, a training module is also provided in the intelligent eyeshade, and the user selects, by means of a button, from the training mode content shown by the display module.
The intelligent eyeshade 100 of the present invention includes an image acquisition module. The image acquisition module can acquire a 3D stereoscopic real scene within the user's field of view. The image acquisition module is arranged at any position from which it can acquire the scene in the user's field of view as the user's head moves. The image acquisition module comprises at least one image acquisition device; preferably, based on the structure of the human body, it comprises two image acquisition devices. The image acquisition device of the invention includes any prior-art device capable of image acquisition, such as a camera or a video camera. The terminal 200 of the present invention includes non-mobile terminals, such as desktop devices, as well as mobile terminals. The mobile terminal can be a mobile phone, a portable computer, a tablet computer, a smart bracelet, smart glasses or any other device able to perform the functions of the terminal.
The virtual-reality-based gait balance training system comprises at least an intelligent eyeshade 100 for displaying images. The intelligent eyeshade 100 comprises at least a balance module 110, a display module 120 and an image processing module 130. The image processing module 130 combines the training guide pattern matched with the training mode with the captured real-time scene to form a real-time training scene and sends it to the display module 120. While the user performs walking training according to the set training rules based on the real-time training scene displayed by the display module 120, the balance module 110 records time-dependent balance data and, upon receiving a display instruction, sends the time-dependent balance data to the display module 120. By combining the training guide pattern with the real scene of training, the invention allows the user to see the real world, so that the user can train while avoiding dangers in the real scene. The invention does not need to collect the user's gait or action information, thereby reducing the number of sensing devices and reducing the volume, weight and operating complexity of the training device.
The real-time training scene in the invention is a virtual-real scene, i.e. a combination of a virtual image with the real scene. The user can see the real scene and, within it, the virtual content. For example, the display module is a transparent module through which the user can see the real living-room scene, in particular the floor of the living room. The image processing module combines the training guide pattern adaptively with the floor scene and displays it, so the user sees the training guide pattern on the floor while still seeing the real scene through the display module. Preferably, the image processing module combines the training guide pattern with the real scene three-dimensionally to form the real-time training scene. When the user turns his or her head to look at the real scene, the training guide pattern displayed on the floor scene also changes adaptively, so the user feels that a real training guide pattern, for example a real grid, exists on the floor of the real scene. After the user steps onto a grid cell, the image processing module truly shows the change in relative position between the training guide pattern and the user, so the user has the sensation of actually stepping on the training guide pattern. The user can therefore avoid obstacles during balance training and easily grasp nearby objects, such as a table edge, while falling, to avoid a fall.
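To make this floor-anchoring behaviour concrete, the sketch below projects the corners of a floor grid cell, defined in a world coordinate frame with the floor at z = 0, into the pixel coordinates of a head-mounted camera with a plain pinhole model, so the drawn pattern follows head rotation. The cell size, camera intrinsics and head pose used here are illustrative assumptions; the patent does not specify how the composition is computed.

```python
import numpy as np

CELL = 0.4  # hypothetical grid cell size in metres

def camera_axes(yaw, pitch):
    """Orthonormal camera axes (x right, y down, z forward), expressed in world
    coordinates, for a head that yaws about the vertical and pitches toward the floor."""
    f = np.array([np.cos(pitch) * np.sin(yaw),
                  np.cos(pitch) * np.cos(yaw),
                  -np.sin(pitch)])                  # gaze direction
    r = np.cross(f, [0.0, 0.0, 1.0]); r /= np.linalg.norm(r)
    d = np.cross(f, r)                              # image "down"
    return np.stack([r, d, f])

def cell_corners(col, row):
    """Corners of one floor cell (z = 0) of the guide grid, in world metres."""
    x0, y0 = col * CELL, row * CELL
    return np.array([[x0, y0, 0], [x0 + CELL, y0, 0],
                     [x0 + CELL, y0 + CELL, 0], [x0, y0 + CELL, 0]], float)

def project(points, cam_pos, yaw, pitch, fx=800, fy=800, cx=640, cy=360):
    """Pinhole projection of world points into eyeshade-camera pixel coordinates."""
    R = camera_axes(yaw, pitch)
    pc = (points - cam_pos) @ R.T                   # world -> camera frame
    if np.any(pc[:, 2] <= 0):
        raise ValueError("point behind the camera")
    return np.column_stack([cx + fx * pc[:, 0] / pc[:, 2],
                            cy + fy * pc[:, 1] / pc[:, 2]])

# Head 1.6 m above the floor, looking 35 degrees downward; as the yaw changes,
# the projected corners move, which is how the displayed pattern "sticks" to the floor.
head = np.array([0.0, 0.0, 1.6])
for yaw in (0.0, np.radians(15)):
    print(np.round(project(cell_corners(0, 1), head, yaw, np.radians(35)), 1))
```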
Preferably, the system further comprises at least one terminal 200 wirelessly connected with the intelligent eyeshade 100, the terminal 200 being provided with a training module 220 for selecting a training mode. The image processing module 130 updates the training guide pattern based on the training mode information sent by the training module 220 and forms the real-time training scene. When the intelligent eyeshade 100 has established a connection with the terminal 200, the training module 220 evaluates the rehabilitation level based on the time-dependent balance data transmitted by the balance module 110. Through the association of the intelligent eyeshade with the terminal, the training mode can be selected and set by the user at the terminal, or selected and set remotely or on site by medical personnel, so that the user performs balance training in a training mode appropriate to his or her current rehabilitation level. The rehabilitation grade may correspond to the balance data; for example, the balance data corresponds to a score after completion of a balance training session, and each score range corresponds to a rehabilitation level.
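As a minimal sketch of the score-to-grade mapping just mentioned, the snippet below assigns a rehabilitation level from a session score using fixed ranges. The score boundaries and level names are hypothetical; the patent only states that each score range corresponds to a rehabilitation level.

```python
# Hypothetical score boundaries; the patent does not define the actual ranges.
REHAB_LEVELS = [
    (90, "level 4 - good balance"),
    (70, "level 3 - mild impairment"),
    (50, "level 2 - moderate impairment"),
    (0,  "level 1 - severe impairment"),
]

def rehabilitation_level(session_score: float) -> str:
    """Map the score of one completed balance training session to a rehabilitation level."""
    for threshold, label in REHAB_LEVELS:
        if session_score >= threshold:
            return label
    raise ValueError("score must be non-negative")

print(rehabilitation_level(83))   # -> "level 3 - mild impairment"
```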
Preferably, in the case of walking training, the image processing module 130 determines the user's effective-action parameters based on the step positions acquired by the image acquisition module and the matching degree parameter of the designated area set by the training rule. For example, a grid pattern is combined with the ground scene in the real scene, so the user can see both the real scene and the grid pattern on the floor; the arrangement of the grid pattern is combined adaptively with the ground scene. The user trains according to preset training rules, for example walking along a designated column of grid cells. If the step position falls within the correct cell the step is valid, otherwise it is invalid. Meanwhile, the balance data collected by the balance module intuitively forms a reference for the user's balance ability and rehabilitation result.
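The valid/invalid step decision and the matching degree parameter could be expressed as a simple containment test plus a hit rate, as sketched below under assumed cell geometry; the tolerance value and the way step coordinates are extracted from the view images are assumptions, not taken from the patent.

```python
CELL = 0.4  # hypothetical cell size in metres (floor coordinates)

def step_is_valid(step_xy, target_cell, tolerance=0.05):
    """True if a step position (metres, floor frame) lands inside the designated
    grid cell, with a small tolerance for detection error at the cell border."""
    col, row = target_cell
    x0, y0 = col * CELL - tolerance, row * CELL - tolerance
    x1, y1 = (col + 1) * CELL + tolerance, (row + 1) * CELL + tolerance
    x, y = step_xy
    return x0 <= x <= x1 and y0 <= y <= y1

def matching_degree(steps, targets):
    """Share of steps that hit their designated cells: one possible
    'matching degree parameter' for the effective-action evaluation."""
    hits = sum(step_is_valid(s, t) for s, t in zip(steps, targets))
    return hits / len(targets) if targets else 0.0

steps   = [(0.15, 0.20), (0.55, 0.18), (0.95, 0.70)]
targets = [(0, 0), (1, 0), (2, 2)]        # (column, row) cells the rule asks for
print(matching_degree(steps, targets))     # third step misses its cell -> ~0.67
```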
Preferably, the intelligent eyeshade further comprises a voice module. Once the training mode is determined, the voice module feeds back to the user, by voice, information related to the behavior result, based on the step position and the matching degree parameter of the designated area set by the training rule. The voice module is a voice device capable of feeding back voice information; it comprises at least a language storage chip and a voice output component, so that guidance and prompts on the training rules can be provided in multiple languages. The invention informs the user of the training rules, or explains them, by voice, so that the user quickly understands the training rules while observing the real-time training scene through the display module, which reduces the time medical staff spend introducing and explaining the training mode.
Preferably, the balance module of the present invention may be a gyroscope or a gyroscope-based balancer, or any prior-art balancing device or balancing assembly capable of obtaining balance data. Considering the comprehensibility required for middle-aged and elderly users, in particular the educational level and comprehension of users over 50, the balance module 110 in the present invention is preferably an electronic pendulum balancer. The electronic pendulum balancer comprises a pendulum bob and a pendulum shaft, and the rotation angle of the pendulum shaft can be output as electronic data. As the head sways, the pendulum bob swings correspondingly: the larger the amplitude and the higher the frequency of the head sway, the larger the swing amplitude of the pendulum; the smaller the head sway, the smaller the pendulum swing. The invention does not need to record the degree of balance of every single action precisely; it only needs to know the user's overall balance state during balance training. The electronic pendulum balancer therefore has the advantage that the balance state of the head can be understood intuitively. Preferably, the electronic pendulum balancer is arranged on the surface of the intelligent eyeshade, so that the user can observe his or her balance ability directly when training in front of a mirror. A high-tech gyroscope obviously does not offer this advantage of direct observation and is costly. During walking training, the electronic pendulum balancer transmits the time-dependent swing parameters to the training module 220 for storage and/or evaluation of the rehabilitation grade. Preferably, the electronic pendulum balancer can be replaced by a pendulum-type tilt sensor.
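A possible handling of the time-dependent swing amplitude is a time-stamped log with a small summary before it is sent to the training module, as in the sketch below; the sampling interface and the choice of mean and peak amplitude as the summary are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SwayLog:
    """Time-stamped swing-amplitude samples from the pendulum balancer (degrees)."""
    samples: List[Tuple[float, float]] = field(default_factory=list)

    def record(self, t_seconds: float, amplitude_deg: float) -> None:
        self.samples.append((t_seconds, amplitude_deg))

    def summary(self):
        """Mean and peak sway amplitude over the session: one intuitive way to
        condense the curve before sending it to the training module."""
        amps = [a for _, a in self.samples]
        return {"mean_deg": sum(amps) / len(amps), "peak_deg": max(amps)} if amps else {}

log = SwayLog()
for t, a in [(0.0, 2.1), (0.5, 3.4), (1.0, 6.8), (1.5, 4.0)]:
    log.record(t, a)
print(log.summary())   # {'mean_deg': 4.075, 'peak_deg': 6.8}
```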
Preferably, the image processing module 130 adaptively combines the corresponding training guide pattern with the real-time ground scene acquired by the image acquisition module, based on the training mode information sent by the training module 220, to form a real-time training scene corresponding to the training rule. Because the training guide pattern is arranged only on the ground, the difficulty of combining the virtual scene with the real scene is reduced, the user only pays attention to the step positions, and the upper limbs can move without restriction. During training the user naturally concentrates the visual field on the ground and on his or her own step positions, so gait information does not need to be acquired by a dedicated device; the step position information can be obtained from the view image alone, which simplifies its acquisition. Preferably, after the user turns through a 180-degree field of view, the image processing module recombines the real scene in the new field of view with the training guide pattern and displays it as the real-time training scene. The user can clearly see both the home environment and the training guide pattern on the ground, can carry out balance training alone safely, and can grab a nearby supporting object in time when toppling, avoiding danger. The invention thus reduces the human cost of accompanying personnel and makes balance training more engaging.
Preferably, the training guide pattern is an ordered arrangement of geometric patterns carrying guiding factors. A geometric training guide pattern helps middle-aged and elderly people understand the training mode, thereby improving their understanding, completion and cooperation with the training behaviors. In tests, the completion rate of middle-aged and elderly users was very high, close to 99%. The ordered geometric patterns are regular or irregular patterns such as triangles, rectangles, polygons, honeycomb patterns and spider-web patterns arranged in various arrays. The geometric pattern of the present invention may be planar or three-dimensional; preferably, the geometric pattern combined with the ground scene is planar, which reduces the burden on the user. The guiding factors of the invention include at least voice information, shape features, color features, numerical features, character features and/or symbol features and picture features related to the geometric patterns. For example, different colors, numbers or symbols are displayed in a plurality of square cells of the array, and the training rule is, for instance, to hop on one leg into the cell numbered 3, or into the red cell. A picture feature is, for example, a puppy picture.
Preferably, the intelligent eyeshade further comprises at least one storage module. While the first wireless communication module of the intelligent eyeshade is disconnected from the second wireless communication module of the terminal, the storage module temporarily stores the time-dependent balance data; once the first wireless communication module is connected to the second wireless communication module, the storage module sends the stored time-dependent balance data and the corresponding training mode information to the training module of the terminal. By providing the storage module, the invention can store the data temporarily and transmit it all after the communication connection is established, ensuring that the data is preserved and not lost. Transmitting the data to the terminal in one batch after training is finished also prevents the user from continuously watching the training data during balance training, which would distract attention; a user with poor balance ability can thus concentrate on the balance training itself.
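The buffer-then-flush behaviour of the storage module can be sketched as below: records accumulate in memory while the wireless link is down and are transmitted in one batch on reconnection. The record format and the transmit callback are placeholders, not the patent's actual protocol.

```python
class BufferedUploader:
    """Buffers (timestamp, balance value, training mode) records while offline and
    flushes them to the terminal's training module once the link is restored."""

    def __init__(self, send_fn):
        self._send = send_fn        # callable that actually transmits one record
        self._buffer = []
        self.connected = False

    def record(self, timestamp, balance_value, training_mode):
        rec = (timestamp, balance_value, training_mode)
        if self.connected:
            self._send(rec)
        else:
            self._buffer.append(rec)    # storage module holds data temporarily

    def on_reconnect(self):
        self.connected = True
        while self._buffer:             # transmit everything accumulated offline
            self._send(self._buffer.pop(0))

sent = []
up = BufferedUploader(sent.append)
up.record(0.0, 2.1, "grid-walk")        # link down: buffered
up.record(0.5, 3.4, "grid-walk")
up.on_reconnect()                        # link restored: both records transmitted
print(len(sent))                         # 2
```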
Preferably, the first wireless communication module of the intelligent eyeshade is connected to a cloud server, and the cloud server is connected to at least one terminal. The cloud server stores the training mode information sent by the training module and the balance data sent by the intelligent eyeshade, and a suggestion module in the cloud server matches a corresponding training suggestion scheme based on the training mode information and the corresponding balance data and feeds the scheme back to the terminal and/or the intelligent eyeshade. The cloud server has the advantages of large data storage capacity and high data processing efficiency, so the data processing for the training mode can be placed on the cloud server, which facilitates remote medical rehabilitation supervision of the user by medical staff. Further, in the current big-data era, the balance data of many training subjects can be stored in association with the corresponding rehabilitation levels and with the training suggestion schemes corresponding to those levels. Having the cloud server feed back an associated training suggestion scheme based on the balance data reduces the burden on medical staff, who only need to add further supplementary advice to the suggested scheme remotely.
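One simple way the suggestion module could associate stored data with a training suggestion scheme is a lookup keyed by training mode and assessed rehabilitation level, as sketched below; the table entries are invented placeholders and the real matching logic is not described in the patent.

```python
# Hypothetical association table: (training mode, rehabilitation level) -> suggestion.
SUGGESTIONS = {
    ("grid-walk", 1): "repeat single-foot standing daily, 10 minutes, with supervision",
    ("grid-walk", 2): "add heel-to-toe straight-line walking, 15 minutes per day",
    ("grid-walk", 3): "progress to Z-shaped and S-shaped walking modes",
}

def suggest(training_mode: str, rehab_level: int) -> str:
    """Return the stored suggestion scheme for this mode/level, to be sent back to
    the terminal and/or the intelligent eyeshade; medical staff can still amend it."""
    return SUGGESTIONS.get((training_mode, rehab_level),
                           "no stored scheme - forward to medical staff for review")

print(suggest("grid-walk", 2))
```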
Preferably, the image processing module 130 adjusts the display angle of the training guide pattern combined with the real-time scene based on the change in angle of the real-time scene acquired by the image acquisition module, so that the real-time training scene changes angle realistically with the user's viewing angle. A person's field of view changes continuously during training, so the training guide pattern must change together with the real scene in the field of view, which facilitates accurate, well-matched actions by the user. For example, when the user's head turns left or right, the angle of the real scene in the field of view changes; the angle of the geometric pattern then changes correspondingly, realizing a realistic change of the real-time training scene.
Preferably, the training mode of the present invention further includes a real-time training scene optimization mode. When the image acquisition module acquires the real scene image, the image processing module 130 performs a calculation based on the ground scene in the acquired real scene and preferentially chooses a placement of the training guide pattern that favors the user's training angle and training distance, so that the combined real-time training scene supports more effective balance training. The users who carry out balance training are mainly the elderly; the starting position at which an elderly user happens to stand is subjective, and the real-time scene in his or her field of view is not necessarily the most suitable place for training. Elderly people sometimes think and respond slowly and may not be able to determine a better training position quickly. If the real-time training scene were displayed only according to the user's current angle and position, the user might be unable to carry out balance training at all or might have to change angle frequently. For example, an elderly person standing in the centre of the living room may not have room to lay out enough training guide cells for balance training. Therefore, the image processing module places the training guide pattern in, for instance, a living-room corridor that is long enough and convenient for the user to move along, based on the ground scene information, and prompts the user, through the real-time training scene shown by the display module, to stand at the training start position and begin balance training. Because the training guide pattern is not laid out solely according to the real scene visible from the user's standing angle and position, but is placed on the basis of the collected real scene information and shown to the user at a better position for starting training, the user obtains a better balance training experience and training effect.
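The placement decision of this optimization mode can be illustrated with a toy search: given a coarse occupancy grid of the floor (True meaning free floor), find the longest straight run of free cells as a candidate strip for the guide pattern. How the real system derives free floor area from the camera images is not specified, so the grid here is a stand-in.

```python
def longest_free_run(floor):
    """Longest horizontal run of free cells in a coarse floor-occupancy grid.
    Returns (row, start_column, length): a candidate strip for the guide pattern."""
    best = (0, 0, 0)
    for r, row in enumerate(floor):
        run = 0
        for c, free in enumerate(row + [False]):      # sentinel ends the last run
            if free:
                run += 1
            else:
                if run > best[2]:
                    best = (r, c - run, run)
                run = 0
    return best

# True = free floor, False = furniture; the clear corridor in row 2 wins.
floor = [
    [True,  True,  False, True,  True,  True ],
    [False, True,  True,  False, True,  False],
    [True,  True,  True,  True,  True,  True ],
]
print(longest_free_run(floor))   # (2, 0, 6)
```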
The invention uses only the intelligent eyeshade for balance training, which reduces the erroneous guidance and erroneous evaluation of the training effect that incorrectly worn sensing equipment would cause during balance training. An eyeshade has the advantage that a person habitually wears it over the eyes in the correct way, and the nose-bridge clip on the intelligent eyeshade helps it sit correctly in front of the eyes, preventing it from tilting or shifting horizontally. The image acquisition device can therefore accurately acquire the real scene in the user's field of view, and the user can perform balance training with a sense of reality. In the prior art, the calculation of balance parameters is complex, and there is no intuitive logical relationship between the evaluation result and the acquired parameters; a user who receives such balance training results does not know from which aspects to consciously control his or her balance. The invention uses head sway information to represent the balance state intuitively, avoids large-scale data acquisition and computation, reduces the calculation steps and complexity of the balance data, and also helps the user understand the balance training result, thereby encouraging the user to consciously control balance during balance training and producing a positive, encouraging and motivating effect.
Example 2
The gait balance training method based on the virtual reality comprises the following steps:
the user wears the intelligent eyeshade, and the power supply and the communication equipment are started. After the user selects the training mode through the intelligent eyeshade and/or the terminal, the image acquisition module acquires the real scene information in the user's view and sends the information to the image processing module 130. The image processing module obtains a training guide pattern based on the training information sent by the training module. The image processing module combines the real scene information with the training guide pattern to form a real-time training scene and sends the real-time training scene to the display module 120. The display module displays the real-time training scene in front of the eyes of the user. The voice module transmits training rule information to the user. In the case that the user performs walking training according to the set training rule based on the real-time training scene displayed by the display module 120, the balance module 110 records the balance data related to time and transmits the balance data related to time to the display module 120.
For example, the training guide pattern is used for stepping training. As shown in fig. 2, different numerical information is displayed within the grid cells. The training modes include the following:
(1) cell 1 lights up; the training items are, first, standing on one foot and, second, standing on the ball of one foot (with eyes open and with eyes closed);
(2) cells 1-1 of the first and second transverse rows light up; the training items are, first, stepping in place with both feet to warm up and, second, standing on the balls of both feet (with eyes open and with eyes closed);
(3) cells 1-2 of the first longitudinal column light up; the training item is standing with the heel of the front foot against the ball of the rear foot;
(4) the first column lights up; the training item is walking heel-to-toe in a straight line;
(5) the odd cells of the first longitudinal column and the even cells of the second longitudinal column light up; the training items are alternating stepping sequences with both feet;
(6) single cells of the first column light up; the training item is striding walking;
(7) cell 1 of the first column, cell 2 of the second column, cell 3 of the third column, cell 4 of the second column, cell 5 of the first column, cell 6 of the second column, cell 7 of the third column, cell 8 of the second column and cell 9 of the first column light up; the training items are "Z"-shaped walking and walking on the forefeet;
(8) cell 1 of the first column, cell 2 of the second column and cell 1 of the third column light up; the training items are forward, backward and cross steps;
(9) the first column, cell 9 of the second column, the third column, cell 1 of the fourth column and the fifth column light up; the training item is walking along an "S" line;
(10) the first to fifth columns light up at cells 1, 9, 1, 9; the training item is walking in an "O"-shaped circle;
(11) the first and second columns light up simultaneously; the training item is walking a figure "8" across the odd and even cells.
Preferably, the number information in the square can also be color information, symbol information or character information. For example, walking is performed according to the text message of "happy and healthy".
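For implementation purposes, each lighting pattern in the list above can be treated as plain data: the (column, row) cells that light up plus the associated training items. The sketch below encodes two of the simpler modes on an assumed grid of 5 columns by 9 rows; the grid dimensions and exact cell coordinates are guesses from the description rather than values taken from fig. 2.

```python
# Assumed 5 columns x 9 rows; cells are addressed as (column, row), 1-based.
TRAINING_MODES = {
    4: {  # mode (4): the whole first column lights up; heel-to-toe straight walking
        "lit_cells": [(1, row) for row in range(1, 10)],
        "items": ["walk heel-to-toe along the lit column"],
    },
    8: {  # mode (8): cells 1/2/1 of the first three columns; forward, backward, cross steps
        "lit_cells": [(1, 1), (2, 2), (3, 1)],
        "items": ["forward step", "backward step", "cross step"],
    },
}

def lit_cells(mode_number):
    """Cells the display module should highlight for a given training mode."""
    return TRAINING_MODES[mode_number]["lit_cells"]

print(lit_cells(8))   # [(1, 1), (2, 2), (3, 1)]
```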
After the user correctly steps into the corresponding grid cell, the training module adds to the score; when the user steps into the wrong cell, no score is added. Preferably, the scoring module is not limited to the training module and can also be a data processing module disposed in the intelligent eyeshade. The image processing module, the training module and the data processing module can be dedicated integrated chips, i.e. chips loaded with the corresponding data processing programs.
Preferably, the invention can be combined with established balance function evaluation methods to realize the evaluation of balance parameters.
For example, the balance parameters may include not only swing amplitude data of the head, but also an active gait index, a functional extension test parameter, a multi-directional extension test parameter, a balance scale score, a single leg balance test parameter, a balance function self-sensation test parameter, a modified gait irregularity scale parameter.
The active gait index evaluates the subject's ability to adjust to gait changes; it requires the subject to perform 8 different gaits, each scored 0-3 on four grades from poor to excellent, for a total score of 0-24. A higher score suggests better balance, and a score below 18 suggests an increased risk of falling. During the training test, the user sees the combined real-time training scene through the intelligent eyeshade and performs balance training according to the training instructions received. During balance training, the data processing module calculates the active gait index based on the gait information acquired by the image acquisition module. Because the user can see a real scene, the gait is more natural and the resulting balance parameters are more accurate.
The functional reach test assesses body position control and static balance. A maximum forward reach of the subject's arm (standing upright, shoulder flexed 90 degrees) of less than 7 inches suggests limited mobility; the test can be completed in about 5 minutes. Once the image acquisition module determines from the acquired real scene that the test has started, the data acquisition module begins to record the reach distance of the user's arm from the images acquired by the image acquisition module. Because the user can see both the real scene and the training guide pattern, he or she can adaptively avoid obstacles and choose a forward-reaching direction in which the arm will not touch an obstacle, giving a better balance test result.
The multi-directional reach test simply and effectively measures stability in four directions (forward, backward, left and right). It correlates well positively with the Berg balance scale and well negatively with the timed up-and-go test. During balance training, the user sees the real-time training scene through the display module and, without an attendant, can choose a starting position where reaching in all four directions is possible. The data processing module collects the user's sway amplitude parameters while reaching in each of the four directions to evaluate the stability of multi-directional reaching, which benefits the accuracy of the balance parameters.
Preferably, the data processing module also calculates and evaluates the balance parameters according to the Berg balance scale. The Berg balance scale is used to evaluate balance and movement function and can be completed in 15-20 minutes. It comprises 14 items of daily activity, each graded on 5 levels (0-4), where 0 means unable to complete and 4 means completed independently, for a total score of 56. The higher the score, the better the balance ability; a score below 45 indicates a risk of falling. The data acquisition module evaluates the user's balance ability based on the user's sway amplitude parameters and the Berg balance scale. Because the user can see a real-time training scene formed from the real scene and the training guide pattern, the daily activity items can be performed in a familiar living environment, avoiding the stiff movements caused by an unfamiliar scene, and the more relaxed actions yield more accurate balance parameters and evaluation results. This cannot be achieved with an eyeshade showing a completely virtual image: because a completely virtual scene differs greatly from the user's living habits and surroundings, the user can hardly perform a fully relaxed daily activity test, and the resulting balance parameters deviate considerably from the user's everyday balance ability.
In the single-leg stance test, the subject stands on one leg and keeps balance for 5 seconds, once with eyes open and once with eyes closed. It can reflect changes in the subject's posture and gait in daily life, but cannot predict all falls; the test has a sensitivity of 36%, a specificity of 76% and a positive predictive value of 31%. The balance module obtains the user's sway parameters during the test to evaluate changes in body position and gait in daily life. Because the user can see the surrounding real scene, he or she can grasp a supporting object at the side of the body at the moment of falling and avoid a fall injury.
The balance self-perception test determines how the subject perceives his or her own degree of balance and recognizes the risk of falling as a disturbance to daily life. The image processing module combines the real scene with virtual obstacle images to test the user's daily-life actions, so that the user forms a clear impression and can feed back how balanced he or she feels. In other words, the intelligent eyeshade strengthens the user's self-perception of balance function and provides accurate feedback, which greatly improves the accuracy of the test. In the prior art, users fill in questionnaires by recalling their daily life; such feedback relies solely on the user's own perception, cannot give accurate results, and has low accuracy.
The modified gait abnormality rating scale evaluates gait and assesses the fall risk of elderly people living in the community. It comprises 7 gait tests, each graded on 4 levels (0 to 3 points), for a maximum total of 21 points; the higher the score, the greater the gait abnormality and fall risk. Prior-art gait tests require special equipment and a designated site, which is inconvenient for the elderly and requires accompanying personnel. With the intelligent eyeshade disclosed by the invention, elderly users can put on the eyeshade themselves and test at home, and the data processing module can evaluate their fall-risk parameter from the head swing amplitude parameter and the step position parameter recorded while the user performs the balance test based on the real-time training scene.
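Purely for illustration, the kind of combination the data processing module might perform could look as follows; the normalisation, weights and names are placeholders, not values disclosed in the patent:

```python
# Illustrative sketch: combine the head swing amplitude parameter and the step
# position (guide-pattern matching) parameter into one fall-risk indicator.


def fall_risk_indicator(head_sway_amplitude: float,
                        step_match_ratio: float,
                        sway_weight: float = 0.6,
                        step_weight: float = 0.4) -> float:
    """Higher values suggest higher fall risk.

    head_sway_amplitude: head sway normalised to 0..1 (0 = steady, 1 = large sway)
    step_match_ratio: fraction of steps landing inside the specified guide areas
    """
    return sway_weight * head_sway_amplitude + step_weight * (1.0 - step_match_ratio)
```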
Preferably, the training guide pattern of the invention can also serve as a game scene in which the user plays games. In this case, the training guide pattern may be planar or three-dimensional and may be configured according to the game design requirements, which increases interest. The training guide pattern is not limited to geometric patterns and may take the shape of any object, building or living being.
It should be noted that the above-described embodiments are exemplary and that those skilled in the art, having the benefit of the present disclosure, may devise various arrangements that remain within the scope of the present disclosure and of the invention. It should be understood by those skilled in the art that the present specification and figures are illustrative only and do not limit the claims. The scope of the invention is defined by the claims and their equivalents.
Claims (7)
1. A virtual reality based gait balance training system, comprising at least an intelligent eyeshade (100) for displaying images, characterized in that the intelligent eyeshade (100) comprises at least a balance module (110), a display module (120) and an image processing module (130),
the image processing module (130) combines a training guide pattern matched with the training mode with the acquired real-time scene to form a real-time training scene and sends the real-time training scene to the display module (120),
when the user performs walking training according to a set training rule based on the real-time training scene displayed by the display module (120), the balance module (110) records time-dependent balance data and, upon receiving a display instruction, sends the time-dependent balance data to the display module (120);
the system further comprises at least one terminal (200) connected wirelessly with the intelligent eyeshade (100), the terminal (200) being provided with a training module (220) for selecting a training mode,
the image processing module (130) updates the training guide pattern based on the training mode information sent by the training module (220) and forms the real-time training scene,
when the intelligent eyeshade (100) establishes a connection with the terminal (200), the training module (220) assesses a rehabilitation level based on the time-dependent balance data sent by the balance module (110);
during walking training, the image processing module (130) determines effective action parameters of the user based on the step positions acquired by the image acquisition module and on matching degree parameters with respect to the specified areas set according to the training rules;
and the image processing module (130), based on the training mode information sent by the training module (220), adaptively combines the corresponding training guide pattern with the real-time ground scene acquired by the image acquisition module to form a real-time training scene corresponding to the training rule.
2. The virtual reality based gait balance training system according to claim 1, characterized in that the intelligent eyeshade further comprises a voice module,
once the training mode is determined, the voice module feeds back to the user, by voice, information related to the behavior result based on the step position and the matching degree parameter of the specified area set according to the training rule.
3. The virtual reality based gait balance training system according to claim 2, characterized in that the balance module (110) is an electronic pendulum balancer,
during the walking training process, the electronic pendulum balancer sends the time-dependent swing amplitude parameters to the training module (220) for storage and/or evaluation of the rehabilitation level.
4. The virtual reality-based gait balance training system according to claim 3, characterized in that the training guide pattern is a geometric pattern with an ordered arrangement of guiding factors,
the guiding factors include at least voice information, shape features, color features, numerical features, character features and/or symbol features related to the geometric pattern.
5. The virtual reality based gait balance training system according to claim 4, characterized in that the intelligent eyeshade further comprises at least one storage module,
the storage module temporarily stores the time-dependent balance data when the first wireless communication module of the intelligent eyeshade is disconnected from the second communication module of the terminal, and
when the first wireless communication module is connected with the second communication module, the storage module sends the stored time-dependent balance data and the corresponding training mode information to the training module of the terminal.
6. The virtual reality-based gait balance training system according to claim 5, wherein the first wireless communication module of the intelligent eyeshade is connected with a cloud server, and the cloud server is connected with at least one terminal,
the cloud server stores the training mode information sent by the training module and the balance data sent by the intelligent eyeshade, wherein
a suggestion module in the cloud server matches a corresponding training suggestion scheme based on the training mode information and the corresponding balance data, and feeds the training suggestion scheme back to the terminal and/or the intelligent eyeshade.
7. The virtual reality based gait balance training system according to one of the preceding claims, characterized in that the image processing module (130) adjusts the display angle of the training guide pattern in accordance with the angle change of the real-time scene acquired by the image acquisition module, so that the real-time training scene changes its angle realistically according to the angle of the user's field of view.
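For readers implementing a comparable system, the store-and-forward behaviour recited in claim 5 could be sketched as follows; the class, the `send` callback and all identifiers are assumptions, not part of the claims:

```python
# Illustrative sketch: time-dependent balance data is buffered while the eyeshade's
# wireless link to the terminal is down and flushed, together with the active
# training-mode information, once the link is restored.

from collections import deque
from typing import Callable, Tuple

Sample = Tuple[float, float, str]  # (timestamp, balance_value, training_mode)


class BalanceDataBuffer:
    def __init__(self) -> None:
        self._pending: deque[Sample] = deque()

    def record(self, sample: Sample, link_up: bool, send: Callable[[Sample], None]) -> None:
        """Buffer the sample; forward everything immediately if the link is up."""
        self._pending.append(sample)
        if link_up:
            self.flush(send)

    def flush(self, send: Callable[[Sample], None]) -> None:
        """Send all buffered samples to the terminal's training module."""
        while self._pending:
            send(self._pending.popleft())
```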
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010248053.0A CN111388968B (en) | 2020-03-31 | 2020-03-31 | Gait balance training device and system based on virtual reality |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010248053.0A CN111388968B (en) | 2020-03-31 | 2020-03-31 | Gait balance training device and system based on virtual reality |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111388968A (en) | 2020-07-10 |
CN111388968B (en) | 2022-02-01 |
Family
ID=71411276
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010248053.0A Active CN111388968B (en) | 2020-03-31 | 2020-03-31 | Gait balance training device and system based on virtual reality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111388968B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112933550A (en) * | 2021-01-29 | 2021-06-11 | 重庆工程职业技术学院 | Physical training system and method based on VR technology |
CN113100717B (en) * | 2021-04-25 | 2023-02-28 | 郑州大学 | Naked eye 3D dizziness training system suitable for dizziness patient and evaluation method |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104027226B (en) * | 2014-06-17 | 2016-11-23 | 国家康复辅具研究中心 | Walking dynamic balance training device |
KR101716015B1 (en) * | 2015-01-06 | 2017-03-13 | 한국과학기술원 | A health apparatus and a controlling method thererof |
CN109621311A (en) * | 2018-11-14 | 2019-04-16 | 深圳市热丽泰和生命科技有限公司 | A kind of parkinsonism posture gait rehabilitation training method based on augmented reality |
CN109731292B (en) * | 2018-12-29 | 2021-01-01 | 北京工业大学 | Balance ability testing and training system and method based on virtual reality technology |
CN109758157B (en) * | 2019-01-29 | 2020-08-25 | 广州晓康医疗科技有限公司 | Gait rehabilitation training evaluation method and system based on augmented reality |
CN110639172A (en) * | 2019-10-12 | 2020-01-03 | 深圳市热丽泰和生命科技有限公司 | Augmented reality rehabilitation training system based on rich scenes |
2020-03-31: CN CN202010248053.0A, patent CN111388968B (en), status: Active
Also Published As
Publication number | Publication date |
---|---|
CN111388968A (en) | 2020-07-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||