Detailed Description
The present application is described in detail below with reference to the attached drawings.
The application provides an immersive upper limb rehabilitation system, which mainly comprises a robot body 6, an active and passive control platform and a virtual reality platform.
The virtual reality platform comprises a display device, which may be a conventional display, a VR head-mounted device, an AR head-mounted device or an MR head-mounted device. The display device presents the constructed virtual training environment to the patient and provides visual feedback during rehabilitation training.
When the rehabilitation system enters the active training rehabilitation mode, the virtual reality platform displays the constructed virtual training environment to the patient through the display device, and establishes a first virtual three-dimensional model corresponding to a part of a real object and/or a second virtual three-dimensional model of a virtual object corresponding to the current virtual training environment.
A real object refers to an object that exists in reality, namely the patient using the rehabilitation system. The part of the real object refers to a body part of the patient, which may specifically be an upper limb or the upper half of the patient's body. The first virtual three-dimensional model corresponding to the part of the real object is the virtual three-dimensional model constructed in the virtual training environment to simulate that body part; it follows the motion of the patient's upper limb in real time, so that the motion of the patient's upper limb is mapped onto the display.
A virtual object refers to a virtually constructed object used to guide the patient's rehabilitation training. The virtual object corresponds to the current virtual training environment, that is, it may change into a different type of object as the virtual training environment changes. Preferably, the virtual object may be a virtual character that performs a rowing motion in the virtual training environment alongside the patient. The virtual object is placed at a position observable by the patient in the virtual training environment. For example, the virtual object may be a virtual crew member sitting in front of the first virtual three-dimensional model corresponding to the patient and riding in the same boat as the patient. The action performed by the virtual object should be the standard action corresponding to the current virtual training environment.
Most existing upper limb rehabilitation robots adopt a virtual environment and can map the motion of the patient onto a virtual object to give the patient a sense of immersion during rehabilitation training. In such schemes, however, the robot can only acquire the motion data of the patient and cannot feed back in real time whether the motion is correct, or it can only issue real-time reminders in text or voice form. Text and voice prompts are not sufficient for the patient to understand how to achieve the required motion, so medical staff often have to accompany the patient and give specific guidance, which creates a heavy workload and requires one-on-one supervision throughout the rehabilitation process. Based on this, the present application proposes adding a second virtual three-dimensional model in addition to the first virtual three-dimensional model corresponding to the patient, so that the patient can intuitively observe the correct standard action instead of only watching the mapping of their own movement in the virtual environment, thereby effectively guiding the patient's rehabilitation training. In addition, because the two virtual three-dimensional models are displayed synchronously in the virtual environment, the patient can directly see the difference between their own motion and the standard action and, even without guidance from medical staff, can actively adjust their action to better match the standard action. This greatly improves the rehabilitation training effect while preserving a sufficient sense of immersion during training.
Preferably, the virtual training environment may be a virtual space whose scene changes as the patient performs a rowing motion with the upper limbs. The virtual training environment may be, for example, a boat on a river, a submarine, an aircraft, a hull running on a track, and so forth.
The virtual reality platform includes at least two guidance modes, comprising at least a first guidance mode and a second guidance mode. In the first guidance mode, when the patient enters the virtual training environment, the first virtual three-dimensional model corresponding to the patient sits at the bow, and no second virtual three-dimensional model exists in the virtual training environment at this time. In the second guidance mode, when the patient enters the virtual training environment, the first virtual three-dimensional model corresponding to the patient is seated at a position other than the bow, namely midship or at the stern, and the virtual training environment contains the second virtual three-dimensional model at this time.
Preferably, when the virtual reality platform is started in the first guidance mode, it displays the virtual training environment it has constructed to the patient through the display device and establishes the first virtual three-dimensional model corresponding to the part of the real object; the current virtual training environment contains no second virtual three-dimensional model.
Preferably, when the virtual reality platform is started in the second guidance mode, it displays the virtual training environment it has constructed to the patient through the display device and establishes both a first virtual three-dimensional model corresponding to the part of the real object and a second virtual three-dimensional model corresponding to the current virtual training environment.
The first guidance mode or the second guidance mode of the virtual reality platform can be chosen freely according to the actual situation of the patient; the main difference between the two modes is the position on the hull where the patient is located for most of the time in the virtual training environment. The first guidance mode gives the patient a wider field of view and a better exercise feeling than the second guidance mode, but its guidance effect on rehabilitation training is relatively weak; the second guidance mode achieves a better guidance effect, even though another second virtual three-dimensional model is always present in the patient's field of view. Therefore, medical staff can select the first guidance mode for patients with strong learning ability or good cognitive responses, and the second guidance mode for patients with weak learning ability or certain cognitive impairments, during the rehabilitation training process.
While the patient uses the rehabilitation system, the virtual reality platform monitors the patient's movement. When the patient's movement is inconsistent with the set motion, a first motion guidance condition is triggered, and the patient is guided to adjust the action by introducing a following positioning point into the virtual training environment.
In the present application, the inconsistency between the patient's movement and the set motion may mean that the motion deviation degree of the patient's movement reaches a first preset deviation threshold. The motion deviation degree may be a quantification of how much the patient's motion differs from the standard motion, and may be calculated from an assessment of the amplitude, angle, strength or speed of the patient's motion.
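For illustration only, the following minimal sketch shows one way such a deviation score could be computed from amplitude, angle, strength and speed. The feature set, the weights and the threshold value are assumptions introduced for this example and are not specified by the application.

```python
# Illustrative sketch: quantifying motion deviation as a weighted combination of
# normalized differences in amplitude, angle, strength and speed. The feature
# names, weights and threshold below are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class MotionSample:
    amplitude: float  # e.g. joint excursion in degrees
    angle: float      # e.g. instantaneous elbow angle in degrees
    strength: float   # e.g. force on the arm support in newtons
    speed: float      # e.g. angular speed in degrees per second

def motion_deviation(patient: MotionSample, standard: MotionSample,
                     weights=(0.3, 0.3, 0.2, 0.2)) -> float:
    """Return a deviation score in [0, 1]; 0 means identical to the standard motion."""
    pairs = (
        (patient.amplitude, standard.amplitude),
        (patient.angle, standard.angle),
        (patient.strength, standard.strength),
        (patient.speed, standard.speed),
    )
    score = 0.0
    for w, (p, s) in zip(weights, pairs):
        denom = max(abs(s), 1e-6)                  # normalize by the standard value
        score += w * min(abs(p - s) / denom, 1.0)  # clamp each term to 1
    return score

# First motion guidance condition: deviation reaches the first preset threshold.
FIRST_DEVIATION_THRESHOLD = 0.25  # assumed value

def first_guidance_triggered(patient: MotionSample, standard: MotionSample) -> bool:
    return motion_deviation(patient, standard) >= FIRST_DEVIATION_THRESHOLD
```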
When the active training rehabilitation mode is entered, the constructed virtual training environment is displayed to the patient; the patient is guided to adjust the action by synchronously or asynchronously establishing a first virtual three-dimensional model corresponding to the part of the real object and a second virtual three-dimensional model of the virtual object corresponding to the current virtual training environment, and/or by replicating at least one virtual three-dimensional model in the current virtual training environment under the non-contact interaction between the patient and the virtual training environment.
The positioning point in the following positioning point may be a position that the upper limb should reach under the standard motion relative to the current motion of the first virtual three-dimensional model; the position may correspond to the palm, the wrist, the elbow or another part of the upper limb. Following in the following positioning point may mean that the positioning point moves accordingly as the patient's motion changes. A complete rowing cycle comprises single-direction actions such as an oar-lift and water-entry stage, a pulling stage, a press-and-push initiation stage and a pushing stage; different following positioning points are arranged for different single-direction actions, and the following positioning point changes as the patient moves from one action to the next. The following positioning points in the display picture are therefore limited in number and clearly specified, and do not mislead the patient. In this way, the patient can be guided to adjust the action promptly and effectively when the patient's action is occasionally inconsistent. The following positioning point may have a meteor-like trail extending toward the patient's upper limb, which indicates the task action path of the patient's movement; the directional nature of the trail allows the patient to clearly see the direction in which to move.
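As an illustrative sketch only, the mapping between single-direction actions and their following positioning points could be organized as below. The phase names mirror the stages described above; the target coordinates are placeholders that would, in practice, come from the standard motion model.

```python
# Illustrative sketch: one following positioning point per single-direction action of
# the rowing cycle, so that only one unambiguous target is displayed at a time.
# Target coordinates are placeholder assumptions.
from enum import Enum, auto

class StrokePhase(Enum):
    CATCH = auto()      # oar-lift and water-entry stage
    DRIVE = auto()      # pulling stage
    FINISH = auto()     # press-and-push initiation stage
    RECOVERY = auto()   # pushing stage

# Assumed target position (x, y, z) of the wrist for each phase, in the scene frame.
PHASE_TARGETS = {
    StrokePhase.CATCH:    (0.45, 0.30, 0.10),
    StrokePhase.DRIVE:    (0.20, 0.35, 0.10),
    StrokePhase.FINISH:   (0.05, 0.25, 0.10),
    StrokePhase.RECOVERY: (0.30, 0.20, 0.10),
}

def following_point(current_phase: StrokePhase):
    """Return the single positioning point shown for the patient's current action."""
    return PHASE_TARGETS[current_phase]
```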
The second virtual three-dimensional model may be converted into a frame structure, and at least one structure point on the frame structure is selected as the following positioning point. When the virtual reality platform is in the first guidance mode and only the first virtual three-dimensional model has been established in the current virtual training environment, the following positioning point is introduced into the virtual training environment by way of establishing the second virtual three-dimensional model. The second virtual three-dimensional model built by the virtual reality platform, which has a large data load and a delayed response, is not loaded into the virtual training environment in full; instead it is simplified into a following positioning point with a smaller data processing load and a faster response, and the standard action amplitude, the standard action path and the like are converted through the following positioning point into visual information that the patient can intuitively observe and compare. If the virtual three-dimensional model were introduced directly, it would have to be introduced every time the patient's action is inconsistent; and regardless of whether the correct standard action is displayed in the virtual three-dimensional environment, the patient can hardly follow the correct standard action completely, so so-called non-standard actions would occur frequently, the system would have to repeatedly load and hide different prompt models, the data load would be large and the response delay would increase. In contrast, the data processing required to load the following positioning point, and its influence on the patient's rehabilitation process, are both very small, which avoids the situation in which the system must repeatedly load and hide different prompt models because the patient's actions are frequently non-standard.
The interaction between the second virtual three-dimensional model and the first virtual three-dimensional model may be, for example, a process in which the motion parameters of the second virtual three-dimensional model are influenced by the motion parameters of the first virtual three-dimensional model.
The virtual scene implementation unit may regulate the motion phase and the motion speed of the second virtual three-dimensional model based on training data related to the patient's upper limb movement. The motion phase may be any of the stages within a single rowing cycle. By regulating the first and second virtual three-dimensional models to remain in the same motion phase, and by regulating the motion speed of the second virtual three-dimensional model, the standard motion is prevented from being too fast or too slow relative to the patient's ability to adapt, so that the patient can clearly observe the difference between their own motion and the standard motion and can quickly and effectively adjust the posture of the upper limb. Under the regulation of the virtual scene implementation unit, the motion time difference between the second and first virtual three-dimensional models does not exceed two motion phases, and the motion speed of the second virtual three-dimensional model is higher than that of the first virtual three-dimensional model but does not exceed a preset speed threshold, so that the patient can keep up and the patient's sense of immersion is enhanced. When the motion time difference between the second and first virtual three-dimensional models reaches two motion phases, the second virtual three-dimensional model is instructed to repeatedly demonstrate the standard motions corresponding to those two motion phases until the first virtual three-dimensional model has completed them. Repeated demonstration may mean that the second virtual three-dimensional model repeats only the standard actions corresponding to those two motion phases.
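A minimal sketch of this pacing rule is given below, assuming integer phase indices and speed values in arbitrary normalized units; the margin and cap values are assumptions, not values given by the application.

```python
# Illustrative sketch of the pacing rule described above: the demonstration model
# stays at most two motion phases ahead of the patient's model, runs somewhat faster
# than the patient but never above a preset cap, and loops the pending phases while
# it waits. Numeric values are assumptions.
MAX_PHASE_LEAD = 2
SPEED_CAP = 1.5  # assumed preset speed threshold (normalized units)

def regulate_demo(demo_phase: int, patient_phase: int,
                  patient_speed: float, speed_margin: float = 1.2):
    """Return (next_demo_phase, demo_speed, repeat_range) for the next control tick."""
    lead = demo_phase - patient_phase
    # Demonstration runs slightly faster than the patient, clamped by the cap.
    demo_speed = min(patient_speed * speed_margin, SPEED_CAP)
    if lead >= MAX_PHASE_LEAD:
        # Repeatedly demonstrate the two pending phases until the patient completes them.
        repeat_range = (patient_phase, patient_phase + MAX_PHASE_LEAD)
        return demo_phase, demo_speed, repeat_range
    return demo_phase + 1, demo_speed, None
```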
The following positioning point can also be obtained by copy-projecting the second virtual three-dimensional model into the virtual training environment. When the virtual reality platform is in the second guidance mode and both the first and second virtual three-dimensional models have been established in the current virtual training environment, the existing second virtual three-dimensional model is copy-projected into the virtual training environment to introduce the following positioning point. The copy projection of the second virtual three-dimensional model is in fact a copy projection of its frame structure, and at least one structure point on the frame structure is selected to obtain the following positioning point.
Copy projection may refer to projecting a copy in a manner that establishes a synchronized association between the copy and the object being copied. While copying, a synchronized association is established between the following positioning point and the second virtual three-dimensional model. On this basis, the following positioning point can maintain a dynamic correspondence with the second virtual three-dimensional model.
Copy projection may alternatively refer to projecting a copy in a manner that establishes a synchronized association between the copy and the virtual training environment. While projecting, a synchronized association is established between the following positioning point and the virtual training environment. On this basis, the following positioning point maintains its relative position in the virtual training environment and guides the patient to move along a path that meets the rehabilitation training requirements.
While the patient uses the rehabilitation system, the virtual reality platform monitors the patient's movement. When the number of times that the patient's movement is inconsistent with the set motion reaches a first preset count threshold, a second motion guidance condition is triggered, and the patient is guided to adjust the action by overlapping two virtual three-dimensional models with transparency changes.
The two virtual three-dimensional models may refer to a second virtual three-dimensional model and a third virtual three-dimensional model.
The second virtual three-dimensional model may always refer to a virtual object that performs a standard action corresponding to the current virtual training environment. The third virtual three-dimensional model may be obtained by a replica projection of the first virtual three-dimensional model. The third and the first virtual three-dimensional model are at different positions in the display.
When the two virtual three-dimensional models are overlapped with transparency changes, the patient can simultaneously observe a motion map from the first-person perspective and a motion map from the third-person perspective. Because the patient can observe their own motion from the first-person and third-person perspectives at the same time, the patient can adjust their action more effectively, achieving a better rehabilitation training effect.
In the overlapping of transparency changes, transparency may refer to the degree of visibility of each of the two virtual three-dimensional models in the display interface. The transparency change may mean that the transparency of the two virtual three-dimensional models in the display interface is not fixed but changes dynamically. Overlapping may mean that the two virtual three-dimensional models occupy the same relative spatial position in the virtual training environment. For example, the main body trunks of the two virtual three-dimensional models are fused into one, while their different upper limb movements are displayed separately.
Under the overlapping arrangement with transparency changes, the virtual reality platform can selectively highlight, within a continuous motion, the specific action that does not conform to the set motion by regulating the transparency changes of the second and third virtual three-dimensional models. Specifically, after the second or third motion guidance condition is triggered, the patient may or may not be able to follow the virtual object performing the standard rowing motion synchronously, which can create a certain time difference between the motions of the patient and the virtual object; in either case, there remains the problem that the posture of the patient's upper limb is wrong or the upper limb is not extended into place.
In contrast, in the virtual reality platform proposed in the present application, when the second or third motion guidance condition has been triggered and the patient's training data is monitored as conforming to the standard motion, the transparency of at least part of the second virtual three-dimensional model is increased. At this point the patient's action meets the rehabilitation training requirements and depends only weakly on the second virtual three-dimensional model, so the visibility of at least part of that model can be reduced, avoiding unnecessary interference to the patient caused by the two models being staggered.
Preferably, if the patient is synchronized with the virtual object, the transparency of the second virtual three-dimensional model corresponding to the virtual object is increased. Preferably, if the patient and the virtual object are not perfectly synchronized, the transparency of the second virtual three-dimensional model is gradually increased in the direction opposite to the upper limb movement. In the synchronized state the two virtual three-dimensional models overlap or almost overlap; in the incompletely synchronized state there is a certain time difference between them and they do not overlap completely. The incompletely synchronized state is different from motion inconsistency: both consistent and inconsistent actions can occur in either the synchronized state or the incompletely synchronized state.
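The following is a minimal sketch of this transparency regulation, assuming opacity values between 0 and 1 and a simple linear fade; the numeric values and the fade function are illustrative assumptions only.

```python
# Illustrative sketch of the transparency regulation described above. Alpha of 1.0
# means fully visible; higher transparency corresponds to lower alpha. Numbers and
# the fade function are assumptions for illustration.
def second_model_alpha(conforms_to_standard: bool, fully_synchronized: bool,
                       offset_behind_motion: float, trailing_length: float = 0.3) -> float:
    """Opacity for (a part of) the demonstration model; 1.0 = fully visible."""
    if not conforms_to_standard:
        return 1.0      # keep the standard-motion model fully visible for guidance
    if fully_synchronized:
        return 0.2      # models overlap: fade the demonstration to avoid clutter
    # Not perfectly synchronized: fade gradually in the direction opposite to the
    # upper limb movement, so the leading part of the demonstration stays visible.
    fade = min(max(offset_behind_motion / trailing_length, 0.0), 1.0)
    return 1.0 - 0.8 * fade
```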
When it is monitored that the patient's training data does not conform to the standard motion, a first region is delineated based on the upper limb action deviation between the second and third virtual three-dimensional models, a second region is delineated based on the first region and the direction of upper limb motion, a third region is delineated based on the first and second regions, and the sharpness corresponding to the first to third regions decreases in sequence.
At least one of the first to third regions may be defined in an irregular shape.
The first region is delineated based on the upper limb action deviation between the second and third virtual three-dimensional models. From this deviation, the motion path or task action path that the patient needs to adjust can be obtained; the motion path can be completed by instructing at least one upper limb joint point of the patient to move, and the area in which the at least one upper limb joint point corresponding to the motion path is located is delineated as the first region. The mutually corresponding upper limb joint points of the second and third virtual three-dimensional models are both retained in the first region, so the patient can directly identify the joint points that need adjustment by observing the first region. An upper limb joint point may be, for example, at the wrist joint or at the elbow joint.
When the upper limb action deviation corresponds to at least two motion paths, for example when both the wrist joint and the elbow joint need to be adjusted so that the forearm meets the standard motion, the first region is preferably delineated first according to the elbow joint and then re-delineated according to the wrist joint, which better matches the movement habits of the human body.
A first predetermined shape is placed over the area where the motion path is located, and the outer edge of the first predetermined shape is expanded outward or contracted inward so that the first region further contains the mutually corresponding upper limb joint points of the two virtual three-dimensional models; the first region is then determined based on the expanded outer edge, so that all required content is included while unnecessary picture content is reduced.
The first predetermined shape may be a preset shape, for example a regular circle; it may also be an irregular shape obtained by statistically analyzing the shape of the first region delineated each time, so as to match the delineation of the first region more closely. The first predetermined shape may also be a shape selected from a number of preset shapes according to the degree of upper limb action deviation.
The second region is delineated based on the first region and the direction of upper limb motion. To ensure that the patient can continue the rehabilitation action while adjusting posture, the outer edge of the first region is taken as a second predetermined shape for the second region, the outer edge of the second predetermined shape is expanded outward so that the second region also contains the area of the upper arm or forearm corresponding to the direction of upper limb motion, and the second region is determined based on the expanded outer edge.
The outer edge of the second region serves as a third predetermined shape for the third region, and the outer edge of the third predetermined shape is expanded outward to delineate the third region. The delineation of the third region is not otherwise limited.
The sharpness corresponding to the first to third regions decreases in sequence. The reduction in sharpness may be achieved by increasing the blur level of the corresponding region.
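A simplified sketch of this three-region delineation is given below. It assumes two-dimensional joint coordinates, approximates the predetermined shapes by circles and assigns increasing blur levels outward; all numeric values, the circle approximation and the blur scale are illustrative assumptions.

```python
# Illustrative sketch of the three-region delineation described above: the first
# region encloses the corresponding joint points of both models, the second region
# extends it along the upper limb motion direction, and the third region expands
# the second; blur increases (sharpness decreases) from the first region outward.
import math

def circle_around(points, margin=0.05):
    """A circle (center, radius) that contains all points, plus a margin."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    r = max(math.hypot(p[0] - cx, p[1] - cy) for p in points) + margin
    return (cx, cy), r

def delineate_regions(joint_points, motion_dir, reach=0.15, third_margin=0.1):
    """joint_points: matching joint positions of the 2nd and 3rd models, as (x, y);
    motion_dir: unit vector of the upper limb motion direction."""
    (cx, cy), r1 = circle_around(joint_points)                  # first region
    # Second region: expand along the motion direction toward the moving arm segment.
    c2 = (cx + motion_dir[0] * reach, cy + motion_dir[1] * reach)
    r2 = r1 + reach
    # Third region: a further outward expansion of the second region's outline.
    c3, r3 = c2, r2 + third_margin
    blur = {"first": 0.0, "second": 0.5, "third": 1.0}          # sharpness decreases outward
    return {"first": ((cx, cy), r1), "second": (c2, r2), "third": (c3, r3)}, blur
```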
The training data differing from the standard motion may refer to the current patient's training data differing from the standard motion corresponding to the patient's current position. It may also refer to the patient's training data at a certain moment in the virtual training environment differing from the action being executed by the second virtual three-dimensional model. On this basis, the system avoids frequently reminding the patient that the action is wrong merely because the patient moves more slowly and fails to keep up with the virtual object; the patient can autonomously regulate the motion speed according to their own stamina, which helps improve the experience.
The second virtual three-dimensional model is executed at a preset speed in such a way that it always remains in the same motion phase as the first virtual three-dimensional model. The action speed of the second virtual three-dimensional model is often faster than that of the patient, which allows the standard action to be demonstrated effectively; keeping the two models in the same motion phase limits the action time difference between them, so that the patient can better follow each action in turn.
Prompts in the prior art usually refer to an entire continuous motion completed by the patient: for example, after the patient finishes a rowing cycle, the system prompts that the completed motion is non-standard or only that its amplitude is insufficient. The patient cannot learn which specific action in that rowing cycle was problematic and can only perform the next cycle according to their own understanding of the prompt, which is unfavorable to recovery. In contrast, with the overlapping arrangement of transparency changes adopted by the rehabilitation system of the present application, the motion prompt can be refined to single actions within a rowing cycle, such as the oar-lift and water-entry stage, the pulling stage, the press-and-push initiation stage and the pushing stage, so the patient can clearly identify which of their actions differ; and the difference between the patient's action and the standard action is presented visually to the patient through the overlapping of transparency changes, so the patient can adjust their action in a quantifiable way.
The virtual reality platform can regulate the overlapping of transparency changes based on the non-contact interaction between the patient and the virtual training environment. Non-contact interaction refers to a process in which the patient does not touch the display screen, but the corresponding motion made by the patient according to the picture on the screen is mapped virtually by means of the sensors carried on the robot body 6. Non-contact interaction may refer to a course of motion, which may be a single rowing cycle rather than a single action.
If the virtual reality platform is started in the first guidance mode, then when the second motion guidance condition is triggered, at least a second virtual three-dimensional model and a third virtual three-dimensional model are built according to the training data, and at the same time the relative spatial position of the first virtual three-dimensional model in the virtual training environment is switched, so that the second and third virtual three-dimensional models can be located at relative spatial positions observable by the patient in the virtual training environment.
Switching the relative spatial position of the first virtual three-dimensional model in the virtual training environment may mean the following: in the first guidance mode, the first virtual three-dimensional model corresponding to the patient is at the bow; after the relative spatial position is switched, it moves to a position other than the bow, namely midship or the stern. The relative spatial position vacated by the first virtual three-dimensional model in the virtual training environment is then used to accommodate the second and third virtual three-dimensional models. In some upper limb rehabilitation systems provided by the prior art, a standard motion demonstration video is inserted directly into the current virtual training environment; the patient can only pause to watch the video, which interrupts the rehabilitation and directly affects the patient's experience, and the patient can only imitate the standard motion by sense without being able to determine whether the motion meets the requirements. Interrupting training and inserting the demonstration video many times seriously affects the rehabilitation training.
If the virtual reality platform is started in the second guidance mode, then when the second motion guidance condition is triggered, the relative spatial position of the first virtual three-dimensional model in the virtual training environment is maintained, and a newly constructed third virtual three-dimensional model is introduced into the virtual training environment at least according to the training data.
Maintaining the relative spatial position of the first virtual three-dimensional model in the virtual training environment may mean the following: in the second guidance mode, the first virtual three-dimensional model corresponding to the patient is located midship or at the stern, its current position does not need to be changed, and the newly built third virtual three-dimensional model is introduced directly. Because the third virtual three-dimensional model is formed by copy-projecting the first virtual three-dimensional model, the model can be obtained with little additional data processing, which avoids picture stuttering caused by excessive computational burden on the system; the patient can then observe their own motion from the first-person and third-person perspectives at the same time, which helps improve the rehabilitation training effect.
While the patient uses the rehabilitation system, the virtual reality platform monitors the patient's movement. When the number of times that the motion deviation degree of the patient's movement reaches a second preset deviation threshold exceeds a second preset count threshold, a third motion guidance condition is triggered, and the patient is guided to adjust the action by overlapping the two virtual three-dimensional models with transparency changes and, in combination, introducing a following positioning point into the virtual training environment. Overlapping the two models with transparency changes and introducing the following positioning point together allow the patient to identify more clearly which action needs to be adjusted and how to adjust it, further improving the rehabilitation training effect.
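The three trigger rules described so far can be summarized by the following sketch, which reuses a deviation score such as the one sketched earlier. The threshold values and the monitoring granularity are assumptions for illustration only.

```python
# Illustrative sketch combining the three trigger rules described above. All
# threshold values are placeholder assumptions; `deviation` is a score in [0, 1].
from typing import Optional

FIRST_DEVIATION_THRESHOLD = 0.25    # first preset deviation threshold (assumed)
SECOND_DEVIATION_THRESHOLD = 0.40   # second preset deviation threshold (assumed)
FIRST_COUNT_THRESHOLD = 3           # first preset count threshold (assumed)
SECOND_COUNT_THRESHOLD = 5          # second preset count threshold (assumed)

class GuidanceMonitor:
    def __init__(self):
        self.inconsistency_count = 0      # times motion differed from the set motion
        self.severe_deviation_count = 0   # times deviation reached the second threshold

    def update(self, deviation: float) -> Optional[str]:
        """Feed one deviation score; return the guidance condition to trigger, if any."""
        if deviation >= SECOND_DEVIATION_THRESHOLD:
            self.severe_deviation_count += 1
        if deviation >= FIRST_DEVIATION_THRESHOLD:
            self.inconsistency_count += 1
        if self.severe_deviation_count > SECOND_COUNT_THRESHOLD:
            return "third"    # transparency-change overlap plus following positioning points
        if self.inconsistency_count >= FIRST_COUNT_THRESHOLD:
            return "second"   # transparency-change overlap of the two models
        if deviation >= FIRST_DEVIATION_THRESHOLD:
            return "first"    # introduce a following positioning point
        return None
```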
If the virtual reality platform is started in the first guidance mode, then when the third motion guidance condition is triggered, a second virtual three-dimensional model is built at least according to the training data, the relative spatial perspective of the first virtual three-dimensional model in the virtual training environment is switched at the same time, and the patient is guided to adjust the action by overlapping the first and second virtual three-dimensional models with transparency changes and, in combination, introducing a following positioning point into the virtual training environment. The relative spatial perspective may refer to the first-person or third-person perspective observed by the patient; switching the relative spatial perspective here may mean switching the patient's first-person perspective to a third-person perspective by rotating the view of the first virtual three-dimensional model in the virtual training environment. For example, the hull in the virtual training environment and the three-dimensional models of the persons on board are turned sideways, preferably toward the side corresponding to the patient's affected upper limb, so that the motion of the affected upper limb can be observed better. When the third motion guidance condition is triggered, the first-person perspective is cancelled and only the third-person perspective is retained in the virtual training environment, so that the patient can visually observe the difference between the affected upper limb and the standard action. Combining this with the following positioning point further improves how closely the patient's motion trajectory matches the standard. The following positioning points include the highest and lowest points that the single-direction motion should reach, as well as several positioning points along the trajectory of the single-direction motion, so that the patient can control the effective bending or extension angle of the upper limb during the motion and achieve a better rehabilitation training effect.
If the virtual reality platform is started in the second guidance mode, then when the third motion guidance condition is triggered, the relative spatial perspective of the first virtual three-dimensional model in the virtual training environment is switched, and the patient is guided to adjust the action by overlapping the first and second virtual three-dimensional models with transparency changes and, in combination, introducing a following positioning point into the virtual training environment. The first and second virtual three-dimensional models have already been constructed in the second guidance mode, so when the third motion guidance condition is triggered, the existing first and second virtual three-dimensional models can be overlapped with transparency changes directly.
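How the response to the second and third motion guidance conditions depends on the guidance mode could be organized as in the sketch below. The scene-operation method names are hypothetical placeholders for operations of the virtual scene implementation unit and are not part of the application.

```python
# Illustrative sketch of the mode-dependent response described above. The `scene`
# methods are hypothetical placeholders, not an API defined by the application.
def handle_guidance(condition: str, guidance_mode: str, scene):
    if condition == "second":
        if guidance_mode == "first":
            scene.move_first_model_off_bow()       # vacate the bow position
            scene.build_second_model()             # standard-motion virtual object
            scene.build_third_model_from_first()   # replica projection of the patient model
        else:  # second guidance mode: keep the patient model where it is
            scene.build_third_model_from_first()
        scene.overlap_with_transparency("second", "third")
    elif condition == "third":
        if guidance_mode == "first":
            scene.build_second_model()
        scene.switch_to_third_person_view()        # turn the hull toward the affected limb
        scene.overlap_with_transparency("first", "second")
        scene.add_following_points()
```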
The robot body 6 is used for assisting the arm of the patient to carry out rehabilitation training and collecting training data in the training process. The training data mainly refers to data corresponding to the motor 301 and the sensors included in the robot body 6 during the rehabilitation training process. Preferably, the robot body 6 may be an existing upper limb rehabilitation device. The upper limb rehabilitation device may for example be an indoor rowing machine.
As a preferred embodiment, the robot body 6 may be a robot based on motion mapping and virtual reality, whose linkage is designed from a trajectory curve fitted to the motion of the upper limbs. Through the cooperation between different control strategies of the motor 301 and the linkage structure provided by the present application, the robot body 6 can assist the patient's upper limb in moving and accomplish rehabilitation training that combines active and passive modes.
The robot body 6 mainly includes a motor mounting block 3 and a linkage 4.
The linkage 4 is mounted on the mounting table 5 in an adjustable and secure manner. The linkage 4 is fixed to the mounting table 5 by means of at least one support rod 1. Each support rod 1 is assembled vertically on the mounting table 5, and the linkage 4 is slidably connected to the rod body of the support rod 1. The operator can adjust the height of the linkage 4 on the mounting table 5 up and down to better accommodate different patients.
The linkage 4 includes an AB rod 403, a BC rod 402 and a CD rod 404 that are rotatably connected to one another in sequence. An arm support 401 is fixedly assembled on the BC rod 402 and is used for placing the forearm of the patient's upper limb. The shape of the arm support 401 is adapted to the shape of the human forearm and resembles an elongated U-shaped structure.
The axis about which the AB rod 403 rotates relative to the BC rod 402 is coplanar with the axis about which the CD rod 404 rotates relative to the BC rod 402, so the robot can support multi-posture motion of the patient's forearm in the plane of the BC rod 402.
The motor mounting block 3 is provided on the AB rod 403 at the end corresponding to pivot A. The output of the motor 301 in the motor mounting block 3 is connected to the corresponding end of the AB rod 403 at pivot A. That is, rotation of the AB rod 403 can be controlled by controlling the motor 301, and because of the linkage relationship among the AB rod 403, the BC rod 402 and the CD rod 404, the movement of the patient's forearm on the BC rod 402 can be driven synchronously.
The motor mounting block 3 is stably assembled above the mounting table 5 through the multi-axis arm set 14. One end of the multi-axis arm set 14 is slidably connected to the support rod 1 to allow the height of the linkage 4 on the mounting table 5 to be adjusted. The other end of the multi-axis arm set 14 is connected to the motor support plate 303 in the motor mounting block 3 to provide stable support for the motor mounting block 3.
The multi-axis arm set 14 comprises at least one rotating shaft and at least one shaft arm 15 rotatably connected to one another in sequence. Two adjacent shaft arms 15 are rotatably connected to each other through a rotating shaft arranged at the end of the arm body. One rotating-shaft connecting block is slidably provided on the support rod 1, another is provided below the motor support plate 303, and the two shaft arms 15 at the two ends of the multi-axis arm set 14 are each rotatably connected to one of these rotating-shaft connecting blocks. The axes about which the shaft arms 15 rotate relative to one another are parallel to each other.
Two support rods 1 are arranged on the mounting table 5, and the motor mounting block 3 on the one hand and the AB rod 403 together with the CD rod 404 on the other are each connected to one support rod 1 through a multi-axis arm set 14, so that the support rods 1 adjustably support the linkage 4.
The weight of the motor 301 in the motor mounting block 3 is transferred mainly to the support rod 1 through the multi-axis arm set 14, so the movement of the patient's upper limb is not affected by the weight of the motor 301; the movement of the upper limb can therefore be evaluated more accurately, which safeguards the rehabilitation training effect.
A bearing 406 with a flange and a locking screw is adopted at the D pivot of the CD rod 404 to connect the inner shaft and the outer shaft, so that the transmission is simple and the structure is stable.
Only one motor mounting block 3 is provided in the robot body 6; the fitting of the upper limb motion trajectory can be accomplished accurately by the operation of a single motor 301, so the robot is simple and convenient to operate, low in cost and suitable for popularization.
A specific mounting manner at the motor mounting block 3 is explained as follows. The motor support plate 303 is mounted on the positioning seat through at least one positioning hole, for example 8 positioning holes. The positioning seat is arranged on one end of the multi-axis arm set 14. The motor 301 is positioned through at least one through hole, for example 4 through holes, on the motor connecting plate 302, and is then placed on the motor support plate 303. Finally, the motor support plate 303 is connected to the motor connecting plate 302 through at least one elongated hole, for example 2 elongated holes, which completes the mounting and positioning of the motor 301.
A specific mounting manner at the linkage 4 is explained as follows. The protruding shaft at the B end of the AB rod 403 extends into the inner ring of a bearing 406 with a flange and locking screws and is fastened by two locking screws. The outer-ring flange of the same bearing 406 is connected to the B end of the BC rod 402 by at least one set of bolts and nuts, for example 3 sets. The C end of the BC rod 402 is connected to the outer-ring flange of another bearing 406 in the same manner, and the inner ring of that bearing 406 is then secured by a locking screw to the protruding shaft at the C end of the CD rod 404. The D end of the CD rod 404 is connected by its flange to the outer ring of a bearing 406 with a flange and locking screws, and the inner ring of that bearing 406 is secured to the protruding shaft on the linkage connecting plate 405. The arm support 401 is mounted on the BC rod 402 by at least one set of bolts and nuts, for example 2 sets. Two sections of elastic band can be sewn at the two groups of square holes on the arm support 401 for auxiliary fixation.
The connection between the linkage 4 and the motor mounting block 3 is a keyed connection between the output shaft of the motor 301 and the hole at the A end of the AB rod 403. The linkage 4 is connected to the support rod 1 through at least one mounting positioning hole, for example 8 mounting positioning holes, in the linkage connecting plate 405.
In use, the motor 301 is driven to rotate by one 48 V power supply, one pulse controller and one driver. Through the keyed connection, the output shaft of the motor 301 transmits the motion to the AB rod 403, which acts as a crank and performs a complete circular motion. The BC rod 402 and the CD rod 404 are driven to move synchronously along a defined track, and the trajectory at the arm support 401 is the fitted trajectory of the desired upper limb movement.
As a preferred embodiment, the main structural dimensions and mounting dimensions of the robot body 6 proposed in the present application are as follows. The center distance between the two ends of the AB rod 403 is 129.60 mm. The center distance between the two ends of the BC rod 402 is 187.00 mm. The center distance between the two ends of the CD rod 404 is 313.80 mm. The arm support 401 is 150 mm long, with a large diameter of 90 mm and a small diameter of 70 mm. The inner-ring diameter of the bearing 406 with a flange and locking screws is 22 mm, and the connection diameter of the outer-ring flange is 60 mm. The midpoint of the A end of the AB rod 403 is spaced from the center of the D end of the CD rod 404 by a vertical distance of 120.32 mm and a horizontal distance of 290.85 mm. The horizontal inclination angle at which the arm support 401 is connected to the BC rod 402 is 45.5°. The vertical height of the midpoint of the D end of the CD rod 404 above the tabletop of the mounting table 5 is 165 mm.
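With these dimensions, the AB, BC and CD rods together with the A-to-D ground offset form a planar four-bar linkage whose crank (the AB rod) can rotate fully. The sketch below illustrates a straightforward position analysis under those dimensions; the coordinate frame, the placement sign of pivot D and the branch choice are assumptions, and the application itself does not give this calculation.

```python
# Illustrative sketch: planar four-bar position analysis using the dimensions above
# (AB = 129.60 mm, BC = 187.00 mm, CD = 313.80 mm, pivots A and D offset by
# 290.85 mm horizontally and 120.32 mm vertically). Frame and branch are assumptions.
import math

AB, BC, CD = 129.60, 187.00, 313.80
A = (0.0, 0.0)
D = (290.85, -120.32)   # assumed placement of pivot D relative to pivot A

def circle_intersection(c1, r1, c2, r2):
    """Intersection points of two circles; raises if the linkage cannot close."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    if d > r1 + r2 or d < abs(r1 - r2):
        raise ValueError("linkage cannot close at this crank angle")
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = math.sqrt(max(r1**2 - a**2, 0.0))
    mx, my = c1[0] + a * dx / d, c1[1] + a * dy / d
    return ((mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d))

def linkage_position(crank_angle_rad, branch=0):
    """Return joints B and C for a given crank angle of the AB rod."""
    B = (A[0] + AB * math.cos(crank_angle_rad), A[1] + AB * math.sin(crank_angle_rad))
    C = circle_intersection(B, BC, D, CD)[branch]
    return B, C

# Sampling one full motor revolution traces the coupler (BC rod) joint positions.
if __name__ == "__main__":
    for k in range(0, 360, 30):
        B, C = linkage_position(math.radians(k))
        print(f"{k:3d} deg  B=({B[0]:7.1f},{B[1]:7.1f})  C=({C[0]:7.1f},{C[1]:7.1f})")
```

Points on the BC rod between joints B and C, where the arm support 401 sits, could then be obtained by interpolating between B and C, tracing the coupler curve that approximates the fitted upper limb trajectory.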
The active and passive control platform is used for sending control instructions to the robot body 6 and recording training data. The active and passive control platform mainly comprises a driving system 7, a single chip microcomputer 9, a sensor group 8, an encoder 11, an upper computer 10 and a display group 2.
At least two modes are provided in the active and passive control platform, comprising at least a passive rehabilitation mode and an active rehabilitation mode. In the passive rehabilitation mode, according to a preset passive rehabilitation scheme and the angular velocity values set in that scheme, the controller sends pulse instructions for the motor 301 to the driver, the driver outputs the set pulses to drive the stepping motor 301 to rotate at the corresponding angular velocity and move the linkage 4, and the patient's arm, placed in the arm support 401, moves together with the linkage 4 to complete the rehabilitation process. The passive rehabilitation scheme may be a preset correspondence between angular velocity and time.
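As a rough sketch of this passive scheme, an angular-velocity-versus-time table can be converted into a pulse frequency for the driver. The step angle, microstepping factor and schedule values below are assumptions introduced only for illustration.

```python
# Illustrative sketch of the passive rehabilitation scheme: a preset table of angular
# velocity versus time is turned into a stepper pulse frequency sent to the driver.
STEP_ANGLE_DEG = 1.8          # assumed full-step angle of the stepping motor 301
MICROSTEPS = 8                # assumed driver microstepping setting

# Assumed passive rehabilitation scheme: (start time in seconds, angular velocity in deg/s).
PASSIVE_SCHEME = [(0, 10.0), (60, 15.0), (180, 20.0), (300, 12.0)]

def scheme_velocity(t: float) -> float:
    """Angular velocity set for time t (piecewise constant over the scheme table)."""
    omega = PASSIVE_SCHEME[0][1]
    for start, value in PASSIVE_SCHEME:
        if t >= start:
            omega = value
    return omega

def pulse_frequency(omega_deg_per_s: float) -> float:
    """Pulse frequency (Hz) commanded to the driver for the desired crank speed."""
    return omega_deg_per_s / STEP_ANGLE_DEG * MICROSTEPS
```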
The sensor group 8 may include at least one of an angle sensor, a force sensor and a speed sensor provided on the arm support 401 of the robot body 6. The sensor group 8 can detect the pressure between the patient's upper limb and the arm support 401, the movement trajectory of the upper limb, the movement speed of the upper limb and the like, and transmit the data to the single chip microcomputer 9. The pressure value obtained by conversion in the single chip microcomputer 9 is transmitted to the upper computer 10 through a USB serial port, and the upper computer 10 displays the pressure value, the upper limb rotation speed and other data on the medical interface in real time.
The display group 2 may include a three-dimensional scene display group 13 and a medical interface display group 12. The virtual reality platform is connected to the three-dimensional scene display group 13 and the medical interface display group 12 respectively. The display group 2 can be stably and adjustably mounted on the mounting table 5 by means of the support rod 1 provided on the mounting table 5 and at least one multi-axis arm set 14. The three-dimensional scene display group 13 displays the selected virtual scene for the patient using the arm support 401 to watch during upper limb rehabilitation, guiding the patient to perform standard and effective rehabilitation training. The medical interface display group 12 displays the data collected by the sensors arranged on the linkage 4 together with the results of calculation and analysis, so that medical staff can understand the patient's rehabilitation training condition more clearly.
As a preferred embodiment, the three-dimensional scene display group 13 may be a display arranged facing the patient and capable of showing a three-dimensional scene picture, and the virtual reality platform may show on this display a three-dimensional scene that changes with the patient's movement. Preferably, the three-dimensional scene display group 13 may be a head-mounted display device employing VR, AR or MR technology. VR (Virtual Reality) technology uses a VR head-mounted display device to isolate the patient's vision and hearing from the outside world and guide the patient to feel present in the virtual environment; a VR head-mounted display device is more immersive than a conventional display and provides a better scene simulation effect. Besides purely virtual picture technology such as VR, AR (Augmented Reality) technology, which superimposes virtual digital pictures on the naked-eye view of reality, or MR (Mixed Reality) technology, which combines digitized reality with virtual digital pictures, may also be adopted.
In the active rehabilitation mode, the patient's arm moves mainly by its own effort, and the linkage 4 associated with the arm support 401 is driven by the patient's upper limb. The pressure sensor and the encoder 11 can detect training data such as the pressure exerted by the upper limb and the angular velocity at which the upper limb drives the linkage 4. The angular speed of upper limb rotation is obtained by the single chip microcomputer 9 from the count pulses produced by the encoder 11 per unit time, and is displayed on the medical interface. The involvement of active motion strengthens the participation of the central nervous system and improves the patient's rehabilitation effect.
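For illustration, the conversion from encoder counts to angular speed could look like the following sketch; the counts-per-revolution value and the sampling window are assumptions, not values given by the application.

```python
# Illustrative sketch of the angular speed conversion performed by the single chip
# microcomputer 9: pulses counted from the encoder 11 in a fixed sampling window are
# converted to degrees per second. The counts-per-revolution value is an assumption.
ENCODER_CPR = 1000        # assumed encoder counts per revolution

def angular_speed_deg_per_s(counts_in_window: int, window_s: float,
                            cpr: int = ENCODER_CPR) -> float:
    """Angular speed of the upper limb driving the linkage, from encoder counts."""
    revolutions = counts_in_window / cpr
    return revolutions * 360.0 / window_s

# Example with assumed values: 25 counts in a 0.5 s window -> 18 deg/s.
```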
It should be noted that the above-described embodiments are exemplary; those skilled in the art, having the benefit of this disclosure, may devise various solutions that fall within the scope of the present disclosure and of the invention. It should be understood that the present specification and figures are illustrative only and do not limit the claims. The scope of the invention is defined by the claims and their equivalents. The present description contains several inventive concepts, indicated for example by "preferably", "according to a preferred embodiment" or "optionally"; each such indication means that the respective paragraph discloses a separate concept, and the applicant reserves the right to file divisional applications according to each inventive concept.