CN112233771A - Knee joint rehabilitation training method, storage medium, terminal and system - Google Patents

Knee joint rehabilitation training method, storage medium, terminal and system

Info

Publication number
CN112233771A
Authority
CN
China
Prior art keywords
preset
patient
training
virtual image
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011218137.6A
Other languages
Chinese (zh)
Inventor
杨云鹏
谢锦华
孙野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Lanruan Intelligent Medical Technology Co ltd
Wuxi Lanruan Intelligent Medical Technology Co Ltd
Original Assignee
Shenyang Lanruan Intelligent Medical Technology Co ltd
Wuxi Lanruan Intelligent Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Lanruan Intelligent Medical Technology Co ltd, Wuxi Lanruan Intelligent Medical Technology Co Ltd filed Critical Shenyang Lanruan Intelligent Medical Technology Co ltd
Priority to CN202011218137.6A priority Critical patent/CN112233771A/en
Publication of CN112233771A publication Critical patent/CN112233771A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30: ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The invention belongs to the field of rehabilitation training equipment and particularly relates to a knee joint rehabilitation training method and system. The knee joint rehabilitation training method uses a mixed reality wearable device and comprises: acquiring a virtual image corresponding to the auxiliary training device; based on mixed reality technology, generating in the mixed reality wearable device a visual effect in which the virtual image overlaps the real scene of the auxiliary training device; acquiring a signal generated when the patient contacts the auxiliary training device at a predetermined position; and using the signal to trigger the predetermined trigger position on the virtual image corresponding to the predetermined position to switch a predetermined scene. Because the virtual image of the training device is obtained with mixed reality technology and combined with the real device, the patient perceives the scene changes provided by the virtual image in the real training environment, which adds interest to an otherwise tedious and repetitive training process and improves the patient's motivation for rehabilitation training.

Description

Knee joint rehabilitation training method, storage medium, terminal and system
Technical Field
The invention belongs to the field of rehabilitation training equipment, and particularly relates to a knee joint rehabilitation training method and system.
Background
Rehabilitation training is an important means in rehabilitation medicine. Its main purpose is to restore the normal self-care function of the patient's affected limb through training, helping the patient recover as fully as possible and thereby achieving the therapeutic effect.
Tumor-type artificial knee prosthesis replacement is a limb-salvage procedure for malignant bone tumors, but a technically successful operation alone cannot achieve its intended effect without effective rehabilitation training. For knee joint replacement, functional exercise is as important as the surgery itself, because it determines the future function and mobility of the knee joint.
The focus of the later stage of rehabilitation, about 6-8 weeks after knee joint replacement, is to gradually restore the load-bearing capacity of the affected limb and to strengthen the patient's endurance and coordination. The exercise commonly used at this stage is step-up-and-down training. In the early phase, the patient relies on a walking stick and the unaffected lower limb to support going up and down the steps, while the affected limb progresses gradually from bearing no load to bearing partial load; the unaffected side steps up first and the affected side steps down first. Once the patient has adapted, reliance on the walking stick is gradually reduced until the patient can walk independently without it, achieving the rehabilitation goal.
At present, rehabilitation patients complete step-up-and-down exercises under a doctor's guidance using the hospital's auxiliary training equipment. Because of postoperative pain and the repetitive, monotonous, and tedious nature of the training actions, patients easily become bored and resistant during training, which works against the expected rehabilitation effect and prolongs the rehabilitation period.
Disclosure of Invention
To solve the above problems, the invention provides a knee joint rehabilitation training method, storage medium, terminal, and system that add interest to the training process and thereby improve the effect and efficiency of rehabilitation training.
The invention is realized by the following technical scheme:
a knee joint rehabilitation training method applying a mixed reality wearable device, the method comprising:
acquiring a virtual image corresponding to the auxiliary training equipment;
based on a mixed reality technology, in the mixed reality wearable device, the virtual image and a real scene of the auxiliary training device generate an overlapped visual effect;
acquiring a signal generated when a patient contacts the auxiliary training device at a preset position;
the signal triggers a preset trigger position corresponding to the preset position on the virtual image to switch a preset scene.
A storage medium having stored therein a plurality of instructions adapted to be loaded and executed by a processor:
acquiring a virtual image corresponding to the auxiliary training equipment;
based on a mixed reality technology, in the mixed reality wearable device, the virtual image and a real scene of the auxiliary training device generate an overlapped visual effect;
acquiring a signal generated when a patient contacts the auxiliary training device at a preset position;
the signal triggers a preset trigger position corresponding to the preset position on the virtual image to switch a preset scene.
A knee joint rehabilitation training terminal comprises: a mixed reality wearable device, the mixed reality wearable device further comprising a processor adapted to implement instructions; and a storage device adapted to store a plurality of instructions, the instructions adapted to be loaded and executed by the processor to:
acquiring a virtual image corresponding to the auxiliary training equipment;
based on a mixed reality technology, in the mixed reality wearable device, the virtual image and a real scene of the auxiliary training device generate an overlapped visual effect;
acquiring a signal generated when a patient contacts the auxiliary training device at a preset position;
the signal triggers a preset trigger position corresponding to the preset position on the virtual image to switch a preset scene.
A knee rehabilitation training system, comprising:
the knee joint rehabilitation training terminal;
the knee joint rehabilitation training terminal is connected with the action information processing device and is used for acquiring action information processed by the action information processing device;
the action information processing device is connected with the action information acquisition device and used for storing and processing the action information of the patient acquired by the action information acquisition device;
the action information acquisition device is arranged on the training medium and is used for acquiring signals generated when the patient contacts with the training medium.
Advantageous effects
1. The invention uses mixed reality technology to obtain a virtual image of the training device and combines the virtual image with the real device, so that the patient perceives the scene changes provided by the virtual image in the real training environment. This adds interest to the tedious and repetitive training process and improves the patient's motivation for rehabilitation training.
2. The invention adds a pressure signal comparison function to the training process and switches between different virtual predetermined scenes according to the comparison result. Because each comparison result corresponds to a different virtual predetermined scene, and each scene corresponds one-to-one to feedback on the training effect, the patient can tell from the clear, intuitive virtual scenes whether the training effect is improving at each stage. This increases the patient's sense of achievement, stimulates enthusiasm for exercise, improves the interest and challenge of rehabilitation training, and helps the expected rehabilitation effect to be reached sooner.
3. For the rehabilitation patient, the invention is simple to operate and convenient to use, adds no extra exercise load, and is easy to apply and popularize.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic structural diagram of the real auxiliary training device;
FIG. 3 is a schematic three-dimensional visualized image of the auxiliary training device;
FIG. 4 is a schematic diagram of the pressure signal comparison process of the present invention;
FIG. 5 is a flow chart of comparing the pressure value generated when the patient contacts the auxiliary training device at the current predetermined position with the pressure value generated at the previous predetermined position;
FIG. 6 is a schematic structural diagram of the regular change of the second predetermined scene relative to the first predetermined scene on the three-dimensional visualized image of the auxiliary training device when the second pressure value is greater than the first pressure value;
FIG. 7 is a schematic structural diagram of the regular change of the second predetermined scene relative to the first predetermined scene on the three-dimensional visualized image of the auxiliary training device when the second pressure value is smaller than the first pressure value;
FIG. 8 is a flow chart of comparing the pressure value generated when the patient contacts the auxiliary training device at the current predetermined position with the initial pressure value;
FIG. 9 is a schematic structural diagram showing that the second predetermined scene differs obviously from the initial predetermined scene on the three-dimensional visualized image of the auxiliary training device when the second pressure value is greater than the initial pressure value;
FIG. 10 is a schematic structural diagram showing that the second predetermined scene differs obviously from the initial predetermined scene on the three-dimensional visualized image of the auxiliary training device when the second pressure value is smaller than the initial pressure value;
FIG. 11 is a schematic structural diagram showing that the second predetermined scene is the same as the initial predetermined scene on the three-dimensional visualized image of the auxiliary training device when the second pressure value is equal to the initial pressure value;
FIG. 12 is a schematic structural diagram of the mixed reality wearable device;
FIG. 13 is a schematic flow chart of the knee joint rehabilitation training system implementation;
FIG. 14 is a schematic structural diagram of the training medium disposed on a rehabilitation training step;
FIG. 15 is a schematic diagram of the training medium disposed on a planar auxiliary training device.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The steps of the method are described with reference to fig. 1, fig. 2, fig. 3 and fig. 12:
fig. 2 is a schematic structural diagram of the real auxiliary training device, fig. 3 is a schematic diagram of the three-dimensional visualized image of the auxiliary training device, and fig. 12 is a schematic structural diagram of the mixed reality wearable device. Fig. 2 shows the predetermined position 1, fig. 3 shows the predetermined trigger position 11 on the three-dimensional visualized image of the auxiliary training device, which corresponds one-to-one with the predetermined position 1, and fig. 12 shows the mixed reality wearable device 5;
step S1: acquiring a virtual image corresponding to the auxiliary training equipment;
in step S1, the patient wears the mixed reality wearable device 5, in which a virtual image is preloaded; the virtual image is a three-dimensional visualized image obtained by 1:1 reconstruction of the auxiliary training device;
step S2: based on a mixed reality technology, in the mixed reality wearable device, the virtual image and a real scene of the auxiliary training device generate an overlapped visual effect;
in step S2, mixed reality technology is applied to match the position of the three-dimensional visualized image of the auxiliary training device in the mixed reality wearable device 5 with the position of the real auxiliary training device. The mixed reality wearable device 5 worn by the patient then presents a visual effect in which the three-dimensional visualized image overlaps the real auxiliary training device, so the patient can see the scene changes displayed on the image without the exercise actions in the real environment being affected;
One way to match the position of the three-dimensional visualized image with that of the real auxiliary training device is to provide the device with a positioning identifier that can be recognized by the mixed reality wearable device 5 and that stores the position information of the real device. The patient, wearing the mixed reality wearable device 5, scans the positioning identifier to obtain its position information; the three-dimensional visualized image and the real auxiliary training device are then configured with the same position information, which completes the position matching;
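As an illustration only, the following sketch shows one way such marker-based position matching could be organised. The pose values and the names Pose, VirtualModel, and scan_positioning_marker are assumptions made for the example; they are not part of the patent or of any particular MR SDK.

```python
from dataclasses import dataclass, field

@dataclass
class Pose:
    position: tuple = (0.0, 0.0, 0.0)        # (x, y, z) in metres, headset world frame
    rotation: tuple = (0.0, 0.0, 0.0, 1.0)   # orientation as a quaternion (x, y, z, w)

@dataclass
class VirtualModel:
    """The 1:1 reconstructed three-dimensional visualized image of the training device."""
    name: str
    pose: Pose = field(default_factory=Pose)

def scan_positioning_marker() -> Pose:
    """Placeholder for the wearable device scanning the positioning identifier.
    A real system would obtain this pose from the MR SDK's marker/image tracking."""
    return Pose(position=(0.2, -0.4, 1.5), rotation=(0.0, 0.0, 0.0, 1.0))

def match_position(model: VirtualModel) -> VirtualModel:
    """Give the virtual model the same position information as the real device,
    so the two overlap in the patient's view."""
    model.pose = scan_positioning_marker()
    return model

if __name__ == "__main__":
    steps_model = match_position(VirtualModel(name="training_steps"))
    print(steps_model.pose)
```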
step S3: acquiring a signal generated when a patient contacts the auxiliary training device at a preset position;
in step S3, the mixed reality wearable device 5 acquires a signal generated when the patient contacts the auxiliary training device at a predetermined position 1. The predetermined position 1 is located on the contact surface between the patient and the auxiliary training device; the signal is used to determine whether the patient is in contact with the device and is emitted by a sensing device, such as a pressure sensor, an infrared sensor, or a photoelectric switch, installed at the predetermined position 1;
when the sensing equipment is a pressure sensor, the signal is a pressure change signal sent by the pressure sensor when the patient is in contact with the auxiliary training equipment at the preset position 1;
when the sensing equipment is an infrared sensor, the signal is a signal sent by the infrared sensor when the patient is in contact with the auxiliary training equipment at the preset position 1;
when the sensing equipment is a photoelectric switch, the signal is a signal sent by the photoelectric switch when the patient is in contact with the auxiliary training equipment at the preset position 1;
step S4: the signal triggers a preset trigger position corresponding to the preset position on the virtual image to switch a preset scene.
In step S4, predetermined trigger positions 11 are set on the three-dimensional visualized image of the auxiliary training device, and they correspond one-to-one with the predetermined positions 1 on the real auxiliary training device;
when a patient wearing the mixed reality wearable device 5 contacts the auxiliary training device and the sensing device detects this contact, the mixed reality wearable device 5 acquires the signal generated at the predetermined position 1, and the signal triggers the predetermined trigger position 11 corresponding to the predetermined position 1 on the three-dimensional visualized image to switch a predetermined scene;
the predetermined scene comprises any color change scene generated when the patient contacts the auxiliary training device at the predetermined position 1, or any preloaded pattern change scene, or any sound change scene, or any tone change scene, or a combination of the above scenes;
the color change scene generated at the predetermined trigger position 11 comprises:
random variation among preset color types;
or random variation among preset colors of the same color family;
or a preset regular change in the brightness of the same color;
or a preset regular change in the saturation of the same color;
the pattern change scene generated at the predetermined trigger position 11 comprises:
random variation of any preloaded pattern;
or regular variation of a preloaded pattern;
the sound change scene generated at the predetermined trigger position 11 comprises:
random variation of any preloaded sound;
the tone change scene generated at the predetermined trigger position 11 comprises:
random variation of any preloaded pitch;
or regular variation of any preloaded pitch.
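The following minimal sketch illustrates the general idea of steps S3 and S4: a contact signal from a predetermined position selects the matching trigger position on the virtual image and switches its scene. The class, the scene values, and the single dispatch function are illustrative assumptions, not the patent's implementation.

```python
import random

class VirtualTrainingImage:
    """Holds one predetermined trigger position per predetermined position
    on the real auxiliary training device (one-to-one correspondence)."""

    def __init__(self, position_ids):
        self.current_scene = {pos_id: None for pos_id in position_ids}

    def switch_scene(self, trigger_id, scene_type="pattern"):
        """Switch the predetermined scene shown at the given trigger position."""
        if scene_type == "color":
            scene = ("color", random.choice(["red", "green", "blue", "yellow"]))
        elif scene_type == "pattern":
            scene = ("pattern", random.randint(1, 5))   # e.g. number of circles displayed
        elif scene_type == "sound":
            scene = ("sound", random.choice(["chime", "drum"]))
        else:
            scene = ("pitch", random.choice(["C4", "E4", "G4"]))
        self.current_scene[trigger_id] = scene
        return scene

def on_contact_signal(image, predetermined_position):
    """Called when a sensor reports contact at a predetermined position;
    the matching trigger position on the virtual image switches its scene."""
    return image.switch_scene(trigger_id=predetermined_position)

if __name__ == "__main__":
    image = VirtualTrainingImage(position_ids=[1, 2, 3])
    print(on_contact_signal(image, predetermined_position=1))
```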
Another embodiment of the method is described with reference to fig. 4, 5, 6, and 12:
FIG. 4 is a schematic diagram of a pressure signal comparison process according to the present invention;
FIG. 5 is a flow chart of comparing the pressure value generated when the patient contacts the auxiliary training device at the current predetermined position with the pressure value generated at the previous predetermined position;
fig. 6 is a schematic structural diagram of the regular change of the second predetermined scene relative to the first predetermined scene on the three-dimensional visualized image of the auxiliary training device when the second pressure value is greater than the first pressure value;
fig. 12 is a schematic structural diagram of a mixed reality wearable device;
the method detects the change in the pressure value generated at the contact point when the patient's affected limb contacts the auxiliary training device during training. When the affected limb has recovered poorly, the pressure at the contact point between the affected limb and the auxiliary training device is small; as the affected limb recovers through rehabilitation training, the pressure at the contact point increases.
Steps S10 and S20 in fig. 4 are the same as steps S1 and S2 in fig. 1 and are not described again;
step S30: acquiring a pressure signal generated by the patient at the predetermined position when the patient is in contact with the auxiliary training device;
in step S30, a pressure sensor is disposed at the predetermined position 1, and when the patient contacts the auxiliary training device at the predetermined position 1, a pressure signal is generated, wherein the pressure signal corresponds to a pressure value;
a first pressure value X1 is defined as the pressure value obtained after processing the pressure signal output by the pressure sensor when the patient's affected limb contacts the auxiliary training device at the previous predetermined position; the first pressure value X1 corresponds to switching the predetermined trigger position to a first predetermined scene;
a second pressure value X2 is defined as the pressure value obtained after processing the pressure signal output by the pressure sensor when the patient's affected limb contacts the auxiliary training device at the current predetermined position; the second pressure value X2 corresponds to switching the predetermined trigger position to a second predetermined scene;
in the training process, when the affected limb of the patient is in contact with the auxiliary training equipment at the current preset position, the pressure sensor arranged at the current preset position acquires a pressure signal, and the pressure signal is converted into a second pressure value after being processed;
acquiring a pressure signal acquired by a pressure sensor arranged at a last preset position when an affected limb of a patient is contacted with the auxiliary training equipment at the last preset position, and converting the pressure signal into a first pressure value after the pressure signal is processed;
step S40: comparing the pressure signals and outputting a processing result;
in step S40, the pressure signal is acquired by the mixed reality wearable device 5, the second pressure value is compared with the first pressure value, and a processing result is output;
step S50: acquiring a processing result, and triggering a preset trigger position corresponding to the preset position on the virtual image to switch a corresponding preset scene;
in step S50, referring to the description shown in fig. 5, the mixed reality wearable device 5 obtains a processing result, and the processing result triggers the predetermined trigger position 11 corresponding to the predetermined position 1 on the three-dimensional visual image of the auxiliary training device to switch to a corresponding predetermined scene;
when X2 > X1, changes in the second predetermined scene from the first predetermined scene exhibit a regular change, the regular change comprising:
presetting the regular change of the brightness of the same color;
or the preset regular change of the saturation of the same color;
or a regular variation of the preloaded pattern;
or a regular variation of any pitch pre-loaded;
for example:
the preset brightness of the same color is changed regularly along with the increase of the pressure value, namely, the larger the pressure value is, the brighter the color is;
or the preset saturation of the same color is changed regularly along with the increase of the pressure value, namely the larger the pressure value is, the higher the saturation of the color is;
or the pre-loaded pattern shows regular change along with the increase of the pressure value, namely the larger the pressure value is, the more complex the pattern is;
or the pre-loaded tone shows regular change along with the increase of the pressure value, namely the larger the pressure value is, the higher the tone is;
when X2 < X1, changes in the second predetermined scene from the first predetermined scene exhibit a regular change, the regular change comprising:
presetting the regular change of the brightness of the same color;
or the preset regular change of the saturation of the same color;
or a regular variation of the preloaded pattern;
or a regular variation of any pitch pre-loaded;
for example:
the brightness of the preset same color changes regularly with the pressure value, i.e. the smaller the pressure value, the darker the color;
or the saturation of the preset same color changes regularly with the pressure value, i.e. the smaller the pressure value, the lower the saturation;
or the preloaded pattern changes regularly with the pressure value, i.e. the smaller the pressure value, the simpler the pattern;
or the preloaded tone changes regularly with the pressure value, i.e. the smaller the pressure value, the lower the pitch;
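As a rough illustration of such a regular mapping, the sketch below derives the brightness and pattern complexity of the second predetermined scene from the comparison of X1 and X2. The step sizes, starting values, and example pressure readings are assumptions made for the example, not values taken from the patent.

```python
def regular_change(x1, x2, prev_brightness, prev_circles):
    """Return (brightness, circle_count) for the second predetermined scene,
    derived from the first scene's values and the pressure comparison."""
    if x2 > x1:
        brightness = min(1.0, prev_brightness + 0.1)   # larger pressure -> brighter colour
        circles = prev_circles + 1                     # larger pressure -> more complex pattern
    elif x2 < x1:
        brightness = max(0.0, prev_brightness - 0.1)   # smaller pressure -> darker colour
        circles = max(1, prev_circles - 1)             # smaller pressure -> simpler pattern
    else:
        brightness, circles = prev_brightness, prev_circles
    return brightness, circles

# Example: a firmer step at the current position (X2 = 22.0 vs X1 = 20.0)
# brightens the colour slightly and adds one circle, as in FIG. 6.
print(regular_change(20.0, 22.0, prev_brightness=0.5, prev_circles=3))  # (0.6, 4)
```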
referring to fig. 2, fig. 6 and fig. 7, the following describes the case in which the predetermined scene is a regular pattern change; color changes and pitch changes follow the same regularity as the pattern change:
in fig. 2, in the upper step training process, 1 is the previous preset position, and 2 is the current preset position;
in fig. 6, 111 is the predetermined trigger position corresponding to the previous predetermined position 1, and 222 is the predetermined trigger position corresponding to the current predetermined position 2. When the patient contacts the previous predetermined position 1, the pressure sensor installed there acquires a pressure signal, which is processed into a first pressure value. The mixed reality wearable device acquires the first pressure value, and the predetermined trigger position 111 corresponding to the previous predetermined position 1 presents a first predetermined scene, which is a preloaded pattern: as shown in the figure, three circular patterns of the same size;
as the step-up training continues and the patient's affected limb contacts the auxiliary training device at the current predetermined position 2, the pressure sensor installed there acquires a pressure signal, which is processed into a second pressure value. The mixed reality wearable device acquires the second pressure value and compares it with the first pressure value. When the second pressure value is greater than the first pressure value, the predetermined trigger position 222 corresponding to the predetermined position 2 on the three-dimensional visualized image of the auxiliary training device is triggered to switch to the corresponding second predetermined scene; as shown in the figure, compared with the first predetermined scene, the second predetermined scene adds one circular pattern of the same size, which the patient can easily recognize;
similarly, when the patient's affected limb contacts the auxiliary training device at the current predetermined position 3, the pressure sensor installed there acquires a pressure signal, which is processed into a second pressure value. The mixed reality wearable device acquires this value and compares it with the first pressure value, which is now the pressure value acquired by the pressure sensor at the previous predetermined position 2 when the affected limb contacted the device there. When the second pressure value is greater than the first pressure value, the predetermined trigger position 333 corresponding to the predetermined position 3 on the three-dimensional visualized image is triggered to switch to the corresponding predetermined scene; as shown in the figure, compared with the scene presented at trigger position 222, the scene at trigger position 333 adds one circular pattern of the same size;
when the second pressure value is smaller than the first pressure value, the predetermined trigger position 333 corresponding to the predetermined position 3 is triggered to switch to a corresponding predetermined scene in which, compared with the scene presented at trigger position 222, one circular pattern of the same size is removed, as shown in fig. 7. In this embodiment the pressure value at the current predetermined position is compared with that at the previous predetermined position, and the predetermined scene is switched regularly according to the comparison result, so the patient understands that each scene change corresponds to feedback on a training action. This increases the patient's interest during training and improves training motivation.
Another embodiment of S30, S40, and S50 in fig. 4 is:
as shown in fig. 8;
step S30: acquiring a pressure signal generated by the patient at the predetermined position when the patient is in contact with the auxiliary training device;
an initial pressure value X is defined as the pressure value converted from the pressure signal output by the pressure sensor when the patient's affected limb first contacts the auxiliary training device at a predetermined position, i.e. the initial predetermined position, at the beginning of a training period; the initial pressure value X corresponds to switching the predetermined trigger position to an initial predetermined scene;
a second pressure value X2 is defined as the pressure value obtained after processing the pressure signal output by the pressure sensor when the patient's affected limb contacts the auxiliary training device at the current predetermined position; the second pressure value X2 corresponds to switching the predetermined trigger position to a second predetermined scene;
step S40: comparing the pressure signals and outputting a processing result;
during training, when the patient's affected limb contacts the auxiliary training device at the current predetermined position, the pressure sensor installed there acquires a pressure signal, which is processed into a pressure value and compared with the initial pressure value;
step S50: acquiring a comparison processing result, and triggering a preset trigger position corresponding to the preset position on the virtual image to switch a corresponding preset scene;
when X2 > X, the second predetermined scene shows an obvious change in color, pattern, sound, or tone relative to the initial predetermined scene;
when X2 = X, the second predetermined scene is the same as the initial predetermined scene;
when X2 < X, the second predetermined scene shows an obvious change in color, pattern, sound, or tone relative to the initial predetermined scene;
the second predetermined scenes presented when X2 < X and when X2 > X are themselves clearly distinguishable from each other in color, pattern, sound, or tone.
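A minimal sketch of this three-way comparison against the initial pressure value is given below. The tolerance (dead band) and the scene labels are assumptions added for illustration; the patent only requires that the three scenes be easy to tell apart.

```python
def scene_for_pressure(initial_x, x2, tolerance=0.5):
    """Return which of the three scenes to show. 'tolerance' is an assumed dead
    band so that 'equal to the initial pressure value' is not decided on exact
    floating-point equality."""
    if x2 > initial_x + tolerance:
        return "above_initial"   # obviously different scene, e.g. four extra circles (FIG. 9)
    if x2 < initial_x - tolerance:
        return "below_initial"   # obviously different scene, e.g. two fewer circles (FIG. 10)
    return "initial"             # same scene as the initial predetermined scene (FIG. 11)

# Usage examples:
print(scene_for_pressure(20.0, 24.0))  # above_initial
print(scene_for_pressure(20.0, 17.0))  # below_initial
print(scene_for_pressure(20.0, 20.2))  # initial
```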
Referring to fig. 2, fig. 9, fig. 10 and fig. 11, the following describes the case in which the predetermined scene is a regular pattern change; color changes and pitch changes follow the same regularity as the pattern change:
in fig. 2, in the upper step training process, 1 is an initial preset position, and 2 is a current preset position;
in fig. 9, 111 is the predetermined trigger position corresponding to the initial predetermined position 1, and 222 is the predetermined trigger position corresponding to the current predetermined position 2. When the patient contacts the initial predetermined position 1, the pressure sensor installed there acquires a pressure signal, which is processed into an initial pressure value. The mixed reality wearable device acquires the initial pressure value, and the initial predetermined scene presented at the predetermined trigger position 111 corresponding to the initial predetermined position 1 is a preloaded pattern: as shown in fig. 9, three circular patterns of the same size;
as the step-up training continues and the patient's affected limb contacts the auxiliary training device at the current predetermined position 2, the pressure sensor installed there acquires a pressure signal, which is processed into a second pressure value; the mixed reality wearable device acquires the second pressure value and compares it with the initial pressure value;
when the second pressure value is greater than the initial pressure value, the predetermined trigger position 222 corresponding to the predetermined position 2 on the three-dimensional visualized image of the auxiliary training device is triggered to switch to the corresponding second predetermined scene. As shown in fig. 9, the second predetermined scene at trigger position 222 has four more circular patterns of the same size than the initial predetermined scene at trigger position 111, an obvious difference that the patient can easily recognize;
similarly, when the patient's affected limb contacts the auxiliary training device at the current predetermined position 3, the pressure sensor installed there acquires a pressure signal, which is processed into a second pressure value; the mixed reality wearable device acquires the second pressure value and compares it with the initial pressure value;
when the second pressure value is greater than the initial pressure value, the predetermined trigger position 333 corresponding to the predetermined position 3 on the three-dimensional visualized image is triggered to switch to the corresponding second predetermined scene; as shown in fig. 9, the scene at trigger position 333 has four more circular patterns of the same size than the initial predetermined scene at trigger position 111;
when the second pressure value is smaller than the initial pressure value, the predetermined trigger position 333 corresponding to the predetermined position 3 is triggered to switch to the corresponding second predetermined scene; as shown in fig. 10, the scene at trigger position 333 has two fewer circular patterns of the same size than the initial predetermined scene at trigger position 111;
when the second pressure value is equal to the initial pressure value, the predetermined trigger position 333 corresponding to the predetermined position 3 is triggered to switch to the corresponding second predetermined scene; as shown in fig. 11, the scene at trigger position 333 presents the same pattern as the initial predetermined scene at trigger position 111;
this embodiment simplifies the predetermined scene changes to three kinds: the initial predetermined scene, the second predetermined scene shown when the pressure is below the initial pressure value, and the second predetermined scene shown when the pressure is above the initial pressure value. This makes the scenes easy for the patient to remember and distinguish. Through these obvious scene changes the patient learns his or her own training effect, which arouses enthusiasm for training, improves training efficiency, and helps promote the rehabilitation effect.
A storage medium having stored therein a plurality of instructions adapted to be loaded and executed by a processor:
acquiring a virtual image corresponding to the auxiliary training equipment;
based on a mixed reality technology, in the mixed reality wearable device, the virtual image and a real scene of the auxiliary training device generate an overlapped visual effect;
acquiring a signal generated when a patient contacts the auxiliary training device at a preset position;
the signal triggers a preset trigger position corresponding to the preset position on the virtual image to switch a preset scene.
Further, the instructions are also adapted to be loaded and executed by the processor to:
acquiring a pressure signal generated when a patient is in contact with the auxiliary training device;
comparing the pressure signals and outputting a processing result;
and acquiring a processing result, and triggering a preset trigger position corresponding to the preset position on the virtual image to switch a corresponding preset scene.
Further, the instructions are also adapted to be loaded and executed by the processor to:
when the patient is in contact with the auxiliary training device, the pressure value generated at the current predetermined position corresponds to switching the predetermined trigger position corresponding to the predetermined position on the virtual image to a second predetermined scene;
when the patient is in contact with the auxiliary training device, the pressure value generated at the previous predetermined position corresponds to switching the predetermined trigger position corresponding to the predetermined position on the virtual image to a first predetermined scene;
when the patient is in contact with the auxiliary training device and the pressure value generated at the current predetermined position is higher or lower than the pressure value generated at the previous predetermined position, the change from the first predetermined scene to the second predetermined scene is regular.
The foregoing storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, or a magnetic or optical disk.
A knee rehabilitation training terminal, comprising: a mixed reality wearable device, the mixed reality wearable device further comprising a processor adapted to implement instructions; and a storage device adapted to store a plurality of instructions, the instructions adapted to be loaded and executed by the processor to:
acquiring a virtual image corresponding to the auxiliary training equipment;
based on a mixed reality technology, in the mixed reality wearable device, the virtual image and a real scene of the auxiliary training device generate an overlapped visual effect;
acquiring a signal generated when a patient contacts the auxiliary training device at a preset position;
the signal triggers a preset trigger position corresponding to the preset position on the virtual image to switch a preset scene.
Further, the mixed reality wearable device further comprises a processor adapted to implement instructions; and a storage device adapted to store a plurality of instructions, the instructions adapted to be loaded and executed by the processor to:
acquiring a pressure signal generated by the patient at the predetermined position when in contact with the auxiliary training device;
Comparing the pressure signals and outputting a processing result;
and acquiring a processing result, and triggering a preset trigger position corresponding to the preset position on the virtual image to switch a corresponding preset scene.
Further, the mixed reality wearable device further comprises a processor adapted to implement instructions; and a storage device adapted to store a plurality of instructions, the instructions adapted to be loaded and executed by the processor to:
when the patient is in contact with the auxiliary training device, the pressure value generated at the current predetermined position corresponds to switching the predetermined trigger position corresponding to the predetermined position on the virtual image to a second predetermined scene;
when the patient is in contact with the auxiliary training device, the pressure value generated at the previous predetermined position corresponds to switching the predetermined trigger position corresponding to the predetermined position on the virtual image to a first predetermined scene;
when the patient is in contact with the auxiliary training device and the pressure value generated at the current predetermined position is higher or lower than the pressure value generated at the previous predetermined position, the change from the first predetermined scene to the second predetermined scene is regular.
The aforesaid mixed reality wearable device includes mixed reality glasses, a mixed reality helmet, or any wearable smart device capable of realizing the mixed reality function.
Based on the same inventive concept, the embodiment of the invention also provides a knee joint rehabilitation training system, which is used for realizing the training method provided by any one of the preferred embodiments. Fig. 13 is a flow chart of a knee joint rehabilitation training system according to an embodiment of the present invention, the system at least includes:
the knee joint rehabilitation training terminal;
the knee joint rehabilitation training terminal is connected with the action information processing device and is used for acquiring action information processed by the action information processing device;
the action information processing device is connected with the action information acquisition device and used for storing and processing the action information of the patient acquired by the action information acquisition device;
the action information acquisition device is arranged on the training medium and is used for acquiring signals generated when the patient contacts with the training medium.
The training medium is the auxiliary training device; it can be a step for rehabilitation training, or a planar auxiliary training device such as flat ground or a floor mat;
the training medium can also be a carrier through which the patient contacts the auxiliary training device, such as a carrier attached to the sole of the patient's foot, for example a shoe cover or an adhesive sticker, or a carrier attached to the auxiliary training device;
the training medium can also be a protrusion arranged on the auxiliary training device, placed on the contact surface between the patient and the device;
the carrier can be chosen according to the environment or training requirements. Further, because patients differ in their degree of recovery, the carrier is preferably a thin, soft material that can be fixed to the patient's sole or shoe sole, such as silica gel, woven cloth, rubber, or synthetic resin;
the action information acquisition device is arranged on the contact surface between the auxiliary training device and the patient,
or on a carrier through which the patient contacts the auxiliary training device;
as shown in fig. 14, the auxiliary training device is a step for rehabilitation training, a carrier 4 is fixed on the step 6, the carrier 4 may be an adhesive tape, and an action information collecting device is installed on the carrier 4 or the step 6;
as shown in fig. 15, 7 is a plane training aid, 4 is a carrier fixed on the plane training aid, the carrier 4 may be an adhesive tape, and an action information collecting device is installed on the plane training aid 7 or the carrier 4;
the action information acquisition device is a pressure sensor, an infrared sensor, a photoelectric switch or other sensors;
the action information acquisition device is connected with the action information processing device, the action information processing device is used for storing and processing the action information of the patient acquired by the action information acquisition device, and the action information of the patient comprises whether the patient is in contact with the auxiliary training equipment or not;
the action information processing device is a host computer (upper computer) program running on a computer; the host computer program acquires the signals collected by the sensors and processes them into digital signals that can be read by the knee joint rehabilitation training terminal;
the knee joint rehabilitation training terminal is a mixed reality wearable device, and the mixed reality wearable device acquires the digital signal through a network interface;
the mixed reality wearable device comprises a processor adapted to implement instructions; and a storage device adapted to store a plurality of instructions, the instructions adapted to be loaded and executed by the processor to:
acquiring a virtual image corresponding to the auxiliary training equipment;
based on a mixed reality technology, in the mixed reality wearable device, the virtual image and a real scene of the auxiliary training device generate an overlapped visual effect;
acquiring a signal generated when a patient contacts the auxiliary training device at a preset position;
the signal triggers a preset trigger position corresponding to the preset position on the virtual image to switch a preset scene.
In use, the patient wears the mixed reality wearable device and trains on the auxiliary training device. When the patient contacts the auxiliary training device, the action information acquisition device arranged on the contact surface between the device and the patient collects the patient's action information, namely whether the patient has contacted the auxiliary training device, which is determined from the signal sent by a pressure sensor, an infrared sensor, a photoelectric switch, or another sensor. The host computer program acquires the signal sent by the sensor and processes it into a digital signal readable by the mixed reality wearable device; the digital signal is transmitted over the network to the mixed reality wearable device worn by the patient, where it triggers the predetermined trigger position corresponding to the predetermined position on the displayed three-dimensional visualized image of the auxiliary training device to switch the predetermined scene.
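As a sketch only, assuming the host computer program reads the sensor locally and forwards a small JSON message to the wearable device over UDP, the data path could look like the following. The address, port, and message format are illustrative assumptions and are not specified in the patent.

```python
import json
import socket

def read_sensor_sample():
    """Placeholder for reading one (position id, pressure value) sample from the
    action information acquisition device, e.g. over a serial or USB link."""
    return {"position": 2, "pressure": 23.4}

def forward_to_wearable(sample, host="192.168.1.50", port=9000):
    """Process the sensor signal into a digital message and send it to the
    mixed reality wearable device over the network interface."""
    message = json.dumps(sample).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, (host, port))

if __name__ == "__main__":
    forward_to_wearable(read_sensor_sample())
```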
The mixed reality technology comprises augmented reality and augmented virtuality and refers to a new visual environment generated by merging the real and virtual worlds, in which physical and digital objects coexist and interact in real time.
The auxiliary training device according to the invention is not limited to the rehabilitation training steps shown in fig. 2.
The training medium of the present invention is not limited to the rehabilitation training step or the planar auxiliary training device, nor to the adhesive sticker or shoe cover fixed on the auxiliary training device. The above is merely a preferred embodiment of the present invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within its protection scope.

Claims (7)

1. A knee joint rehabilitation training method is applied to a mixed reality wearable device, and is characterized in that: the method comprises the following steps:
acquiring a virtual image corresponding to the auxiliary training equipment;
based on a mixed reality technology, in the mixed reality wearable device, the virtual image and a real scene of the auxiliary training device generate an overlapped visual effect;
acquiring a signal generated when a patient contacts the auxiliary training device at a preset position;
the signal triggers a preset trigger position corresponding to the preset position on the virtual image to switch a preset scene.
2. The knee joint rehabilitation training method according to claim 1, characterized in that:
the virtual image is a three-dimensional visualized image obtained by 1:1 reconstruction of the auxiliary training device;
the preset position is arranged on the contact surface of the patient and the auxiliary training equipment;
the signal is used for judging whether the patient is in contact with the auxiliary training equipment or not;
the preset trigger positions are arranged on the three-dimensional visual image and correspond to the preset positions one by one;
the preset scenes comprise any color random change scenes generated by the patient when the patient is in contact with the auxiliary training device at the preset position, or any pre-loaded pattern change scenes, or any sound change scenes, or any tone change scenes, or a combination of the above.
3. The knee joint rehabilitation training method according to claim 1, characterized in that:
acquiring a pressure signal generated at a last preset position when a patient is in contact with the auxiliary training equipment;
acquiring a pressure signal generated at a current preset position when a patient is in contact with the auxiliary training equipment;
comparing the pressure signal generated by the current preset position with the pressure signal generated by the last preset position;
and acquiring a processing result, and triggering a preset trigger position corresponding to the preset position on the virtual image to switch a corresponding preset scene.
4. The knee joint rehabilitation training method according to claim 3, characterized in that:
when the patient is in contact with the auxiliary training device, acquiring the pressure value generated at the current predetermined position, and establishing a correspondence between this pressure value and the switching of the predetermined trigger position corresponding to the predetermined position on the virtual image to a second predetermined scene;
when the patient is in contact with the auxiliary training device, acquiring the pressure value generated at the previous predetermined position, and establishing a correspondence between this pressure value and the switching of the predetermined trigger position corresponding to the predetermined position on the virtual image to a first predetermined scene;
when the pressure value generated at the current preset position is higher or lower than the pressure value generated at the last preset position, the change of the second preset scene and the first preset scene presents a regular change.
5. A storage medium having stored therein a plurality of instructions adapted to be loaded and executed by a processor:
acquiring a virtual image corresponding to the auxiliary training equipment;
based on a mixed reality technology, in the mixed reality wearable device, the virtual image and a real scene of the auxiliary training device generate an overlapped visual effect;
acquiring a signal generated when a patient contacts the auxiliary training device at a preset position;
the signal triggers a preset trigger position corresponding to the preset position on the virtual image to switch a preset scene.
6. A knee joint rehabilitation training terminal comprises: a mixed reality wearable device, the mixed reality wearable device further comprising a processor adapted to implement instructions; and a storage device adapted to store a plurality of instructions, the instructions adapted to be loaded and executed by the processor to:
acquiring a virtual image corresponding to the auxiliary training equipment;
based on a mixed reality technology, in the mixed reality wearable device, the virtual image and a real scene of the auxiliary training device generate an overlapped visual effect;
acquiring a signal generated when a patient contacts the auxiliary training device at a preset position;
the signal triggers a preset trigger position corresponding to the preset position on the virtual image to switch a preset scene.
7. A knee joint rehabilitation training system is characterized by comprising:
a knee joint rehabilitation training terminal of claim 6;
the knee joint rehabilitation training terminal is connected with the action information processing device and is used for acquiring action information processed by the action information processing device;
the action information processing device is connected with the action information acquisition device and used for storing and processing the action information of the patient acquired by the action information acquisition device;
the action information acquisition device is arranged on the training medium and is used for acquiring signals generated when the patient contacts with the training medium.
Application CN202011218137.6A, filed 2020-11-04 (priority date 2020-11-04), published as CN112233771A (pending): Knee joint rehabilitation training method, storage medium, terminal and system

Priority Applications (1)

Application Number: CN202011218137.6A; Publication: CN112233771A (en); Priority Date: 2020-11-04; Filing Date: 2020-11-04; Title: Knee joint rehabilitation training method, storage medium, terminal and system

Applications Claiming Priority (1)

Application Number: CN202011218137.6A; Publication: CN112233771A (en); Priority Date: 2020-11-04; Filing Date: 2020-11-04; Title: Knee joint rehabilitation training method, storage medium, terminal and system

Publications (1)

Publication Number: CN112233771A; Publication Date: 2021-01-15

Family

ID=74121491

Family Applications (1)

Application Number: CN202011218137.6A; Publication: CN112233771A (en); Status: Pending; Title: Knee joint rehabilitation training method, storage medium, terminal and system

Country Status (1)

Country Link
CN (1) CN112233771A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113241150A (en) * 2021-06-04 2021-08-10 华北科技学院(中国煤矿安全技术培训中心) Rehabilitation training evaluation method and system in mixed reality environment
CN114367086A (en) * 2021-12-31 2022-04-19 华南理工大学 Lower limb rehabilitation training game system
TWI762369B (en) * 2021-07-02 2022-04-21 長庚大學 Mixed-reality guided exercise training system


Similar Documents

Publication Publication Date Title
CN112233771A (en) Knee joint rehabilitation training method, storage medium, terminal and system
US20230058936A1 (en) Augmented reality system for phantom limb pain rehabilitation for amputees
US10206793B2 (en) System and method for conscious sensory feedback
Leong et al. proCover: sensory augmentation of prosthetic limbs using smart textile covers
Antfolk et al. Sensory feedback from a prosthetic hand based on air-mediated pressure from the hand to the forearm skin.
US6344062B1 (en) Biomimetic controller for a multi-finger prosthesis
US11016569B2 (en) Wearable device and method for providing feedback of wearable device
Zhou et al. Human motion tracking for rehabilitation—A survey
AU7355498A (en) Artificial sensibility
EP3157473A1 (en) A haptic feedback device
WO2009026289A2 (en) Wearable user interface device, system, and method of use
Sadihov et al. Prototype of a VR upper-limb rehabilitation system enhanced with motion-based tactile feedback
US20120130280A1 (en) Legs rehabilitation device and legs rehabilitation method using the same
JPS61217172A (en) Apparatus for moving electric energy to living body tissure or therefrom
CN109562256A (en) Correlation technique for electric current to be delivered to the equipment of body and is used to treat
CN112105319B (en) Method for setting up a myoelectric prosthesis system, surface electrode assembly, electrode and fastening element
EP1207823A2 (en) Emg control of prosthesis
Koiva et al. Shape conformable high spatial resolution tactile bracelet for detecting hand and wrist activity
CN103815991A (en) Double-passage operation sensing virtual artificial hand training system and method
CN213424602U (en) Knee joint rehabilitation training system
CN108209913A (en) For the data transmission method and equipment of wearable device
Babiloni et al. The estimation of cortical activity for brain‐computer interface: applications in a domotic context
CN106096220B (en) A kind of acupoint information methods of exhibiting, relevant device and system
Anlauff et al. VibeWalk: Foot-based tactons during walking and quiet stance
CN205507686U (en) Wearable device of foot with virtual reality control function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination