CN113768760B - Control method and system of walking aid and driving device


Publication number: CN113768760B
Authority: CN (China)
Prior art keywords: target object, walking aid, lower limb, motion, speed
Legal status: Active
Application number: CN202111050505.5A
Other languages: Chinese (zh)
Other versions: CN113768760A (en)
Inventor
罗朝晖
张笑千
尚鹏
杨德龙
侯增涛
王博
刘程祥
Current Assignee: Shenzhen Institute of Advanced Technology of CAS
Original Assignee: Shenzhen Institute of Advanced Technology of CAS
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN202111050505.5A
Publication of CN113768760A
Priority to PCT/CN2021/137589 (WO2023035457A1)
Application granted
Publication of CN113768760B

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H - PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 3/00 - Appliances for aiding patients or disabled persons to walk about
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 - Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb

Abstract

The application relates to the technical field of intelligent walking aids and provides a control method for a walking aid, comprising the following steps: acquiring motion information of a target object using the walking aid, wherein the motion information comprises a lower limb motion state of the target object; inputting the motion information into a trained first preset model to obtain a motion prediction state of the target object; inputting the motion prediction state and the lower limb motion state into a trained second preset model to obtain regulation and control parameters of the walking aid; and controlling the moving direction and the moving speed of the walking aid according to the regulation and control parameters. Because the regulation and control parameters are obtained from the motion information of the target object, the walker-based rehabilitation training method is self-adaptive, the involvement of professional personnel in operating the walking aid can be reduced, and the applicability and popularization rate of the walking aid are improved.

Description

Control method and system of walking aid and driving device
Technical Field
The application belongs to the technical field of intelligent walking aids, and particularly relates to a control method, a control system, a driving device and a storage medium for a walking aid.
Background
The main pathological manifestation of lower limb dysfunction is that the patient's lower limbs can hardly provide effective support for the body, which severely impairs mobility and quality of life; lower limb dysfunction caused by stroke is particularly common. With economic development, patients with lower limb dysfunction demand an ever higher quality of life, so rehabilitation training that accelerates the recovery of balance ability and muscle strength has become an inevitable choice for many patients.
At present, because intelligent rehabilitation walking-aid equipment is lacking, rehabilitation hospitals at home and abroad still rely mainly on manual rehabilitation training provided by professional caregivers. This not only requires professional skills but also imposes heavy long-term demands on the caregivers, aggravating the shortage of medical care resources. Some exoskeleton rehabilitation robots are now available, but their parameters can generally only be adjusted by professional research and development personnel, which greatly limits their applicability; as a result, rehabilitation training methods based on rehabilitation robots in the prior art have poor applicability and a low popularization rate. Meanwhile, once its parameters are set, a rehabilitation robot cannot adjust automatically according to the actual training situation, so the fit between the rehabilitation training method and the user's actual training situation is low and the rehabilitation training effect is poor.
Disclosure of Invention
The embodiments of the application provide a control method and system for a walking aid, a driving device and a storage medium, which can solve the technical problems of poor applicability and low popularization rate of rehabilitation robots in the prior art.
In a first aspect, an embodiment of the present application provides a method for controlling a walking aid, including:
acquiring motion information of a target object using the walking aid, wherein the motion information comprises a lower limb motion state of the target object;
inputting the motion information into a trained first preset model to obtain a motion prediction state of the target object;
inputting the motion prediction state of the target object and the motion state of the lower limb into a trained second preset model to obtain the regulation and control parameters of the walking aid;
and controlling the walking aid to move according to the regulation and control parameters.
Based on this method, the regulation and control parameters can be obtained from the motion information of the target object so as to control the moving direction and moving speed of the walking aid; the walker-based rehabilitation training method is therefore self-adaptive, the involvement of professionals in operating the walking aid can be reduced, and the applicability and popularization rate of the walking aid are improved. Meanwhile, when the walking aid is controlled by this method, its movement closely fits the actual training situation of the target object, so the rehabilitation training effect of the walker-based rehabilitation training method is better.
In one possible implementation manner of the first aspect, the lower limb movement state includes lower limb movement data of the target object acquired by the inertial sensor, and distance data between the lower limb of the target object and the walking aid acquired by the ranging sensor;
before the inputting the motion prediction state and the lower limb motion state of the target object into the trained second preset model, the method further comprises:
acquiring the stride of the target object by using the lower limb movement data;
based on the distance data, a first distance is obtained, the first distance being a distance between the ankle joint of the target object and the walker in a current direction of movement of the target object.
In one possible implementation manner of the first aspect, the lower limb movement data includes a lifting angle of the lower limb and a length value of the lower limb;
the step length of the target object is obtained by utilizing the lower limb movement data, and the step length comprises the following steps:
acquiring the step length of the target lower limb by using the lifting angle of the target lower limb of the target object and the length value of the target lower limb;
and obtaining the stride based on the step length of the target lower limb.
Illustratively, the lifting angle of the lower limb is obtained by inertial sensors, and the inertial sensors include a first inertial sensor disposed at a thigh part of the lower limb and a second inertial sensor disposed at a shank part of the lower limb; the lifting angle of the lower limb comprises a thigh lifting angle acquired by the first inertial sensor and a shank lifting angle acquired by the second inertial sensor; the length values of the lower limbs comprise thigh length values and shank length values;
the step size is obtained according to the following formula:
D S =D 1 sinθ 1 +D 2 sinθ 2
wherein: d S For said step length, D 1 Is the thigh length value, D 2 Is the value of the length of the lower leg, theta 1 For the thigh elevation angle θ 2 Raising an angle for the lower leg.
In one possible implementation of the first aspect, the distance data includes a plurality of second distances, and the ranging sensor includes a plurality of laser ranging sensors; the plurality of second distances are the distances between the lower leg of the target object and a plurality of different positions on the walking aid, acquired by the plurality of laser ranging sensors; the first distance is obtained by a weighted summation of the plurality of second distances.
In a possible implementation manner of the first aspect, the motion information further includes plantar pressure data of the target object acquired by a pressure sensor.
In a possible implementation manner of the first aspect, the motion prediction state includes a speed prediction state, the speed prediction state indicating whether the motion speed of the target object is in a safe range or in an unsafe range; the method further comprises:
and if the speed prediction state is that the movement speed of the target object is in an unsafe range, controlling the walking aid to brake until the speed of the walking aid is zero.
In a second aspect, embodiments of the present application provide a control system for a walking aid, comprising a walking aid, a driving device, and a motor disposed on the wheels of the walking aid;
the driving device is used for acquiring motion information of a target object using the walking aid, the motion information comprising a lower limb motion state of the target object; inputting the motion information into a trained first preset model to obtain a motion prediction state of the target object; inputting the motion prediction state and the lower limb motion state into a trained second preset model to obtain the regulation and control parameters of the walking aid; and controlling the moving direction and the moving speed of the walking aid according to the regulation and control parameters.
In one possible implementation of the second aspect, the motion information further comprises plantar pressure data of the target object, and the lower limb motion state comprises lower limb motion data of the target object and distance data between the lower limb of the target object and the walking aid; the system further comprises an inertial sensor, a plurality of laser ranging sensors and a pressure sensor, wherein the laser ranging sensors are arranged at different positions of the walking aid and are used for acquiring the distance data, the inertial sensor is used for acquiring the lower limb motion data, and the pressure sensor is used for acquiring the plantar pressure data of the target object; in use, the inertial sensor and the pressure sensor are worn on the target object.
In a third aspect, embodiments of the present application provide a control device for a walking aid, including:
a motion information acquisition unit for acquiring motion information of a target object using the walker, the motion information including a lower limb motion state of the target object;
the motion prediction state acquisition unit is used for inputting the motion information into a trained first preset model to acquire a motion prediction state of the target object;
the control parameter acquisition unit is used for inputting the motion prediction state of the target object and the motion state of the lower limbs into a trained second preset model to acquire control parameters of the walking aid;
and the control unit is used for controlling the moving direction and the moving speed of the walking aid according to the regulation and control parameters.
In a fourth aspect, embodiments of the present application provide a drive device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements a method for controlling a walking aid according to any one of the first aspect.
In a fifth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the method for controlling a walking aid of any one of the above first aspects.
In a sixth aspect, embodiments of the present application provide a computer program product, which, when run on a drive device, causes the drive device to perform the method of controlling a walker according to any one of the above-mentioned first aspects.
It is to be understood that, for the beneficial effects of the second aspect to the sixth aspect, reference may be made to the relevant description in the first aspect, and details are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic view of a control system for a walker according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of a method of controlling a walker according to an embodiment of the present application;
fig. 3 is a schematic diagram of an arrangement of laser ranging sensors according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a deep convolutional neural network model provided in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a deep neural network model provided in an embodiment of the present application;
FIG. 6 is a schematic structural view of a walker control device provided in accordance with an embodiment of the present application;
fig. 7 is a schematic structural diagram of a driving device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather mean "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless otherwise specifically stated.
At present, when rehabilitation training is carried out with a rehabilitation robot, a professional is generally required to set the parameters of the rehabilitation robot, and the robot then drives the patient with lower limb dysfunction through rehabilitation training according to the set parameters. The existing rehabilitation robots therefore depend heavily on professionals, which lowers their applicability and limits the popularization of robot-based rehabilitation training. Moreover, because the rehabilitation robot cannot adjust automatically according to the actual training situation once its parameters are set, the fit between the robot's actual movement and the user's actual training situation is low, resulting in a poor rehabilitation training effect.
According to the control method of the walking aid provided by the application, the regulation and control parameters of the walking aid are obtained by analyzing the actual motion information of the target object using the walking aid, and the moving direction and moving speed of the walking aid are then controlled according to these parameters, so that the walker-based rehabilitation training method is self-adaptive and the applicability and popularization rate of the walking aid are improved. Meanwhile, when the walking aid is controlled by this method, its movement closely fits the actual training situation of the target object, so the rehabilitation training effect of the walker-based rehabilitation training method is better.
Fig. 1 is a schematic structural diagram of a control system of a walking aid according to an embodiment of the present application. The system comprises a walking aid 101, a motor 102 arranged on the wheels of the walking aid, a driving device 103, and a ranging sensor group 104, an inertial sensor group 105 and a pressure sensor group 106 communicatively connected with the driving device. The ranging sensor group 104 comprises a plurality of ranging sensors 1041, the inertial sensor group 105 comprises a plurality of inertial sensors 1051, and the pressure sensor group 106 comprises a plurality of pressure sensors 1061. The ranging sensor group is arranged on the walking aid; during use, the inertial sensor group is worn on the lower limbs of the target object, and the pressure sensor group is worn on the soles and the waist of the target object. In the embodiment of the application, the walking aid is positioned in front of the target object during use.
In this embodiment, the motion information is obtained through the sensors in the ranging sensor group, the inertial sensor group and the pressure sensor group: the lower limb motion state is obtained through the inertial sensor group and the ranging sensor group, and the motion information also includes the plantar pressure obtained from the pressure sensor group. The driving device obtains the motion information of the target object from these sensors and inputs it into the trained first preset model to obtain the motion prediction state of the target object; the motion prediction state output by the first preset model and the lower limb motion state are then input into the trained second preset model to obtain the corresponding regulation and control parameters of the walking aid, and the driving device controls the moving direction and moving speed of the walking aid through the drive motor according to the regulation and control parameters.
In the system provided by the embodiment of the application, the movement of the walking aid can be controlled according to regulation and control parameters obtained from the motion information of the target object, so that the walker-based rehabilitation training method is self-adaptive and the involvement of professionals in operating the walking aid can be reduced; the system therefore has high applicability and generality. Meanwhile, the movement of the walking aid in this system closely fits the actual motion of the target object, so rehabilitation training based on the system is effective.
Optionally, the inertial sensor may be a nine-axis inertial sensor; it should be noted that each inertial sensor may also be another type of inertial sensor, and the type of inertial sensor is not particularly limited in the embodiments of the application.
In another case, the lower limb motion state in the motion information may be video information or image information obtained by a camera. Illustratively, the driving device in the system may be connected with a camera arranged on the walking aid and used for capturing video or images of the lower limb motion of the target object using the walking aid. The driving device acquires the video or images captured by the camera and inputs the resulting video or image information, together with the plantar pressure obtained from the pressure sensor group, into the trained first preset model to obtain the motion prediction state of the target object; the video or image information and the motion prediction state are then input into the trained second preset model to obtain the regulation and control parameters of the walking aid, which are used to control the moving direction and moving speed of the walking aid; the movement of the walking aid is controlled based on these parameters.
In an alternative embodiment, the control system of the walking aid stores the data of the training process and retrains the first and second preset models using the newly stored data. The parameters of the first and second preset models are thereby updated, so that the system adjusts dynamically with use.
Referring to fig. 2, it is a schematic flow chart of a control method of a walking aid provided in an embodiment of the present application. By way of example and not limitation, the method may include the steps of:
s201, obtaining motion information of a target object using the walking aid, wherein the motion information comprises a lower limb motion state of the target object.
In the embodiment of the present application, specifically, the acquisition of the motion information of the target object may be started after the target object wears the relevant device and starts the walking aid. Wherein the motion information comprises a lower limb motion state of the target object. In one embodiment, the lower limb movement state includes lower limb movement data of the target subject and distance data between the lower limb of the target subject and the walker. In one embodiment, plantar pressure data of the target object acquired by the pressure sensor may also be included.
Alternatively, the motion information may be sensor information obtained by a sensor, as described above, or may be obtained by a camera and a sensor together.
S202, inputting the motion information into a trained first preset model to obtain a motion prediction state of the target object.
In the embodiment of the present application, the motion prediction state is a prediction of the motion state of the target object at the next moment, obtained from the motion information of the target object at the current moment. By predicting the motion state of the target object, the walking aid can be adjusted in real time at the next moment, so that the walking aid and the target object move almost simultaneously and the walking aid assists the target object in walking well.
In an alternative embodiment, the motion prediction state may be a prediction of different motion parameters of the target object. For example, the motion prediction state may include a speed prediction state, a moving limb side and a motion capability prediction state, wherein the speed prediction state represents whether the motion speed of the target object is in a safe range, the moving limb side represents which lower limb of the target object is in a motion state, and the motion capability prediction state represents the motion capability level of the target object.
In the present embodiment, the moving limb side is a prediction of which side of the target object is in a motion state, that is, of which lower limb is in motion. A lower limb being in the motion state means that it is in the motion phase, or in a lifted state. The moving limb side may be "the left lower limb is in motion" or "the right lower limb is in motion". Optionally, patients undergoing rehabilitation training have different motor abilities in their two lower limbs: one side may be healthy and the other affected, or both sides may be affected with one moving more strongly than the other. For ease of understanding, the relative abilities of the two lower limbs are used to distinguish them: the lower limb with relatively strong motor ability is generally called the healthy limb, and the lower limb with relatively weak motor ability is called the affected limb. When the motion information of a target object using the walking aid is obtained, which lower limb of the target object is the affected limb and which is the healthy limb is obtained at the same time; therefore, the moving limb side can also be expressed as "the healthy limb is in motion" or "the affected limb is in motion".
In the present embodiment, the speed prediction state is an evaluation of the motion speed of the target object, specifically of whether the motion speed of the target object is within a safe range. A motion speed that is too high or too low indicates that the target object is in an unsafe state. Exemplary speed prediction states are "the motion speed is in a safe range" and "the motion speed is in an unsafe range".
In this embodiment, the motion capability prediction state is an evaluation of the motion capability of the target object; different motion capabilities are associated with the stride, step frequency and continuous training time of the target object. For example, the motion capability prediction state may be "motion capability low", "motion capability medium" or "motion capability high".
In one embodiment, the speed prediction state and the moving limb side each have two possible values and the motion capability prediction state has three, so 12 motion prediction states can be obtained by combination. The motion prediction states are denoted M_x, where x = 1, 2, 3, ..., 11, 12. Table 1 lists all possible motion prediction states.
TABLE 1
[Table 1 is published as an image in the original document. It enumerates the 12 motion prediction states M_1 to M_12, formed by combining the speed prediction state (safe/unsafe range), the moving limb side (healthy/affected limb in motion) and the motion capability prediction state (low/medium/high).]
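For illustration only, the following Python sketch enumerates these 12 combinations; the label strings and the mapping of indices M_1 to M_12 to particular combinations are assumptions, since Table 1 itself is only available as an image in the source.

```python
from itertools import product

# Assumed labels; the actual ordering of M_1..M_12 in Table 1 is not
# reproduced here because the table is only published as an image.
SPEED_STATES = ("speed in safe range", "speed in unsafe range")
LIMB_SIDES = ("healthy limb in motion", "affected limb in motion")
CAPABILITIES = ("capability low", "capability medium", "capability high")

MOTION_PREDICTION_STATES = {
    f"M_{i + 1}": combo
    for i, combo in enumerate(product(SPEED_STATES, LIMB_SIDES, CAPABILITIES))
}

for name, (speed, side, cap) in MOTION_PREDICTION_STATES.items():
    print(name, "->", speed, "|", side, "|", cap)
```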
In the embodiment of the present application, the first preset model is configured to predict the motion prediction state of the target object from the motion information: the motion information of the target object is input to the model, and the model outputs the motion prediction state of the target object, which may be any one of the states in Table 1.
Optionally, the motion information input to the model includes the lower limb motion state and the plantar pressure data of the target object, wherein the lower limb motion state includes the lower limb motion data and the distance data; the lower limb motion data are acquired by the inertial sensor, and the distance data, representing the distance between the lower limb of the target object and the walking aid, are acquired by the ranging sensor. The plantar pressure data are acquired by pressure sensors arranged under the soles of the target object. Optionally, the inertial sensor is a nine-axis inertial sensor and the ranging sensor is a laser ranging sensor.
S203, inputting the motion prediction state and the lower limb motion state of the target object into a trained second preset model to obtain the regulation and control parameters of the walking aid.
In the embodiment of the application, the motion prediction state and the lower limb motion state are used as input data of the second preset model to obtain the regulation and control parameters of the walking aid. The obtained regulation and control parameters are closely related to the motion prediction state and the lower limb motion state, and because the motion prediction state is part of the input, the regulation and control parameters are themselves predictive; the regulation of the walking aid is therefore anticipatory, the walking aid can actively respond to changes in the motion state of the target object, and the rehabilitation training method has good self-adaptability.
In an optional embodiment, before the motion prediction state and the lower limb motion state of the target object are input into the trained second preset model, the lower limb motion state needs to be preprocessed. Specifically, the lower limb motion state comprises the lower limb motion data of the target object acquired by the inertial sensor and the distance data between the lower limb of the target object and the walking aid acquired by the ranging sensor; the preprocessing of the lower limb motion state comprises: acquiring the stride of the target object by using the lower limb motion data; and obtaining a first distance based on the distance data, the first distance being the distance between the ankle joint of the target object and the walking aid in the current moving direction of the target object.
According to the embodiment of the application, before the lower limb motion state is input into the second preset model, the lower limb motion data are converted into the stride of the target object and the distance data are converted into the first distance. The lower limb motion data collected by the inertial sensor and the distance data between the lower limb of the target object and the walking aid collected by the ranging sensor have high dimensionality; the conversion reduces the dimensionality, which lowers the difficulty of model training and improves the accuracy of the model's output.
Illustratively, the lower limb movement data includes a lower limb lift angle and a lower limb length value; the method for acquiring the stride of the target object specifically comprises the following steps: acquiring the step length of the target lower limb by using the lifting angle of the target lower limb of the target object and the length value of the target lower limb; and obtaining the stride based on the step length of the target lower limb. By the method, the data information (lower limb length) of the target object can be integrated into the calculation of the step length, so that the obtained data is closer to the actual situation of the target object.
In an alternative embodiment, the elevation angle of the lower limb is obtained by an inertial sensor, the inertial sensor includes a first inertial sensor disposed in a thigh portion of the lower limb, and a second inertial sensor disposed in a shank portion of the lower limb; the lifting angle of the lower limb comprises a thigh lifting angle acquired by the first inertial sensor and a shank lifting angle acquired by the second inertial sensor; the lower limb length values include a thigh length value and a shank length value.
In practical application, inertial sensors are arranged on the thigh and the shank of the target object. A human lower limb consists of a thigh and a shank connected by the knee joint, and during actual movement the lifting angles of the thigh and the shank differ; detecting the two lifting angles separately and calculating the movement distance from them together with the thigh length and shank length yields more accurate movement distance data.
In general, inertial sensors are the primary components used to detect and measure acceleration, tilt, shock, vibration, rotation, and multiple degrees of freedom of motion. In the embodiment of the present application, the inertial sensor may be a nine-axis inertial sensor. In the embodiment of the present application, the elevation angle refers to an angle of a target (for example, thigh and calf of a target object) with a vertical direction detected during the movement.
In practical application, the typical walking pattern of a patient with stroke-induced lower limb dysfunction is modelled first. The movement of a stroke patient (taking unilateral lower limb motor dysfunction as an example) is defined as three phases: a two-leg support phase, a healthy-limb stepping phase and an affected-limb catch-up phase. In the starting stage of the walking aid, the patient's two legs touch the ground in parallel, i.e., both legs provide support. The patient then steps forward with the healthy limb until its full sole touches the ground, and the walking aid moves along with the patient, providing support during this phase. Once the healthy limb has stepped and landed, it can carry part of the body weight, but the walking speed and amplitude of the affected limb remain limited; at this point the walking aid dynamically adjusts its parameters to assist the affected limb so that it can move up to the position of the healthy limb, i.e., until the affected limb completes its step and its full sole lands. The above analysis is based mainly on experiments and verification of the patient's motion in the sagittal plane. Because the experiments and verification in this application consider the motion of the human body in the sagittal plane, the limb motion can be simplified into a joint-linkage diagram regardless of how the foot and upper body move, in which the thigh and the shank are two links connected by the knee joint.
Based on the above simplification, the step length in this embodiment is obtained according to the following formula:

D_S = D_1 sin θ_1 + D_2 sin θ_2

where D_S is the step length, D_1 is the thigh length value, D_2 is the shank length value, θ_1 is the thigh lifting angle, and θ_2 is the shank lifting angle.
The step lengths of the left leg and the right leg of the target object are calculated separately based on the above method; the stride of the target object is the sum of the two step lengths.
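As a minimal sketch of the computation just described, the Python below evaluates the step-length formula for each leg and sums the two step lengths to obtain the stride; the function names, units (metres, degrees) and example values are illustrative assumptions, not taken from the patent.

```python
import math

def step_length(thigh_len_m: float, shank_len_m: float,
                thigh_angle_deg: float, shank_angle_deg: float) -> float:
    """D_S = D_1*sin(theta_1) + D_2*sin(theta_2), with the lifting angles
    measured from the vertical by the thigh and shank inertial sensors."""
    return (thigh_len_m * math.sin(math.radians(thigh_angle_deg))
            + shank_len_m * math.sin(math.radians(shank_angle_deg)))

def stride(left_step_m: float, right_step_m: float) -> float:
    """Stride of the target object = sum of the left and right step lengths."""
    return left_step_m + right_step_m

# Example: 0.45 m thigh, 0.40 m shank, modest lift angles on both legs.
left = step_length(0.45, 0.40, 20.0, 12.0)
right = step_length(0.45, 0.40, 15.0, 8.0)
print(round(stride(left, right), 3), "m")
```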
In the embodiment of the present application, the distance data are converted before being input into the second preset model: based on the distance data, a first distance is obtained, the first distance being the distance between the ankle joint of the target object and the walking aid in the current moving direction of the target object. That is, the embodiment of the application converts the distance data into the distance between the ankle joint of the target object and the walking aid, for the following reason. In practical application, the distance between the waist of the target object and the walking aid is fixed, and when the target object walks normally the distance between the feet of the target object and the walking aid in the current moving direction stays within a certain range. If the feet of the target object are too close to or too far from the walking aid, the body of the target object may tilt forward or backward, so the distance between the foot of the target object and the walking aid indicates the tendency of the target object in the fore-aft direction. However, because the motion of the foot is relatively complicated, measuring and calculating this distance directly is difficult; converting it into the distance between the ankle joint of the target object and the walking aid solves this measurement problem.
In an alternative embodiment, the distance data comprise a plurality of second distances, and the ranging sensor comprises a plurality of laser ranging sensors; the plurality of second distances are the distances between the lower leg of the target object and a plurality of different positions on the walking aid, acquired by the plurality of laser ranging sensors; the first distance is obtained by a weighted summation of the plurality of second distances.
Illustratively, the plurality of laser ranging sensors are arranged on the walking aid in a triangular array; during the movement of the walking aid and the target object, the laser ranging sensors are located in front of the target object, at a height on the walking aid close to the ankle joints.
In one embodiment, the number of laser ranging sensors is 6, arranged as shown in Fig. 3, where they are numbered 31, 32, 33, 34, 35 and 36. The laser ranging sensors 31 and 34 are wide-range laser ranging sensors in the X direction; the laser ranging sensors 32 and 35 are wide-range laser ranging sensors in the Y direction; the laser ranging sensors 33 and 36 are high-precision laser ranging sensors. The laser ranging sensors 31, 32 and 33 correspond to the left lower leg of the target object, and the laser ranging sensors 34, 35 and 36 correspond to the right lower leg. The first distance is obtained according to the following formulas:
S_L = η_L X_1 + μ_L X_2 + δ_L X_3
S_R = η_R X_4 + μ_R X_5 + δ_R X_6

where X_1 to X_6 are the values measured by the laser ranging sensors 31 to 36, respectively; η_L, μ_L, δ_L and η_R, μ_R, δ_R are the weights corresponding to X_1, X_2, X_3 and X_4, X_5, X_6, respectively; and S_L and S_R are the first distances corresponding to the left and right ankle joints of the target object. Each weight is an empirical value, generally related to the height of the target object, the distance between the feet when standing, and similar factors, and can be obtained through experiments on different individuals; the specific method is not described again here.
In this method, considering that the ankle joint of the target object moves in three axes during walking and its swing amplitude is strongly disturbed, a plurality of laser ranging sensors are used to form a measurement array and the distance is measured through a weighted mean filtering algorithm, so that the resulting value of the first distance is highly accurate.
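The weighted mean filtering described above might be sketched as follows; the weight values are placeholders (the patent states they are empirical, subject-dependent values), and the assumption that X_1 to X_6 correspond to sensors 31 to 36 follows the grouping given for Fig. 3.

```python
# Assumed empirical weights for one subject; in practice they are
# calibrated per individual (height, standing foot separation, etc.).
WEIGHTS_LEFT = (0.3, 0.3, 0.4)    # eta_L, mu_L, delta_L for X1, X2, X3
WEIGHTS_RIGHT = (0.3, 0.3, 0.4)   # eta_R, mu_R, delta_R for X4, X5, X6

def first_distances(x: list[float]) -> tuple[float, float]:
    """x = [X1..X6]: readings of laser ranging sensors 31..36 (metres).
    Returns (S_L, S_R), the first distances for the left and right ankle
    joints, via the weighted summation described above."""
    x1, x2, x3, x4, x5, x6 = x
    s_l = WEIGHTS_LEFT[0] * x1 + WEIGHTS_LEFT[1] * x2 + WEIGHTS_LEFT[2] * x3
    s_r = WEIGHTS_RIGHT[0] * x4 + WEIGHTS_RIGHT[1] * x5 + WEIGHTS_RIGHT[2] * x6
    return s_l, s_r

print(first_distances([0.42, 0.40, 0.41, 0.45, 0.44, 0.46]))
```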
S204, controlling the moving direction and the moving speed of the walking aid according to the regulating and controlling parameters.
In an embodiment of the application, the regulation and control parameters may be speed variation values of the walking aid in different directions. Taking the target object as reference, the direction the target object faces is forward, the direction behind the target object is backward, the target object's left is leftward and the target object's right is rightward; the regulation and control parameters comprise a forward speed, a leftward speed, a rightward speed and a backward speed, all non-negative, output in the form of a 4 × 1 matrix. Optionally, each output value is the amount by which the speed of the walking aid needs to increase in the corresponding direction; for example, an output of (0.2, 0, 0, 0) indicates that the forward speed needs to increase by 0.2 m/s. The moving direction and moving speed of the walking aid are then controlled based on the regulation and control parameters by converting them into motion parameters of the walking aid itself (such as the output power of the forward motor and the steering angle of the steering motor), so that the walking aid moves according to the corresponding motion parameters.
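As an illustration of how the 4 × 1 speed-increment output could be turned into a drive command, the sketch below resolves the four non-negative increments into a net planar velocity change; the decomposition into two velocity components and the sign conventions are assumptions made for illustration, not something the patent specifies.

```python
def apply_regulation(params, current_vx=0.0, current_vy=0.0):
    """params = (forward, left, right, backward) speed increments in m/s,
    all non-negative, as output by the second preset model.
    Returns the updated (vx, vy) to be commanded to the walker's drive."""
    forward, left, right, backward = params
    vx = current_vx + forward - backward   # + along the current walking direction
    vy = current_vy + left - right         # + toward the target object's left
    return vx, vy

# Example: the model asks for a 0.2 m/s forward speed increase.
print(apply_regulation((0.2, 0.0, 0.0, 0.0), current_vx=0.5))  # -> (0.7, 0.0)
```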
Alternatively, the regulation and control parameters may be motion parameters of the walking aid itself; for example, they may be the amount by which the output power of the motor driving the walking aid forward needs to be increased or decreased, and/or the steering angle of the motor controlling the steering of the walking aid. In that case the walking aid can be controlled directly according to the regulation and control parameters.
In an alternative embodiment, in step S204 the moving direction and moving speed of the walking aid are controlled according to the regulation and control parameters; specifically, when the speed prediction state indicates that the motion speed of the target object is in the safe range, the moving direction and moving speed of the walking aid are controlled according to the regulation and control parameters. The walking aid is then in a follow-up mode.
Optionally, the method further includes: and if the speed prediction state is that the movement speed of the target object is in an unsafe range, controlling the walking aid to brake until the speed of the walking aid is zero. At this point the walker enters a safe mode.
Generally, if the motion speed of the target object is in an unsafe range, the target object is moving dangerously fast or dangerously slow: an excessively high speed may indicate a loss of balance before a fall, while an excessively low speed may indicate that the training has lasted too long or that the lower limbs of the target object are weak.
Optionally, the waist of the target object is connected to the walking aid through a belt; in the safe mode, while the walking aid is controlled to brake, the walking aid is also controlled to tighten the belt around the waist of the target object, so that the walking aid is not pushed by the target object.
In an alternative embodiment, the method of re-entering the rehabilitation training process after the walking aid has been put into the safe mode may comprise: if the safe mode has lasted longer than a preset time, controlling the walking aid to re-enter the rehabilitation training process (step S201 is executed again).
Optionally, the method of re-entering the rehabilitation training process after the walking aid has been put into the safe mode may further comprise: re-entering the rehabilitation training process after a safe-mode release instruction is obtained, where the release instruction can be sent by the target object through a button on the walking aid once the target object has confirmed his or her own state.
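A schematic sketch of this safe-mode behaviour is given below; the callback interface to the walking aid, the timeout value and the loop structure are all assumptions made for illustration.

```python
import time

SAFE_MODE_TIMEOUT_S = 10.0   # assumed preset duration; not specified in the patent

def run_safe_mode(get_speed, brake, release_requested):
    """Brake until the walking aid's speed reaches zero, then remain in safe
    mode until either the preset time elapses or the target object presses
    the release button; training (step S201) then resumes.
    get_speed/brake/release_requested are hypothetical callbacks into the
    walker's drive and button interfaces."""
    while get_speed() > 0.0:
        brake()
    entered = time.monotonic()
    while not release_requested():
        if time.monotonic() - entered > SAFE_MODE_TIMEOUT_S:
            break
        time.sleep(0.05)
    return "resume_training"   # re-enter the rehabilitation training process

# Trivial stand-ins so the sketch runs end to end.
speed = [0.3]
print(run_safe_mode(lambda: speed[0],
                    lambda: speed.__setitem__(0, 0.0),
                    lambda: True))   # pretend the release button was pressed
```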
To better explain the idea of the present application, the first preset model in step S202 and the second preset model in step S203 are specifically described below.
In an optional embodiment, the first preset model is a deep convolutional neural network model, and the motion information input into the model comprises the lower limb motion state and plantar pressure data of the target object; before the motion information is input into the first preset model, the motion information is subjected to normalization and reconstruction processing.
Optionally, the lower limb movement state comprises lower limb movement data and distance data. The lower limb movement data may be obtained by a plurality of nine-axis inertial sensors which are worn, in use, on the thigh, calf and toe, respectively, of the target subject. The distance data is obtained by means of a laser distance measuring sensor, which is arranged on the walking aid. The sole pressure value is acquired through a pressure sensor worn on the sole of the target object.
For example, there are 6 nine-axis inertial sensors in total, worn in use on the thigh (measuring the hip joint), the shank (measuring the knee joint) and the toe (measuring the ankle joint) of the left and right sides of the target object; they measure the angle, angular acceleration, acceleration and geomagnetism of each joint in the X, Y and Z directions, giving 72 dimensions of data, and each nine-axis inertial sensor additionally provides 1 dimension of temperature data, or 6 dimensions for the 6 sensors, so the data measured by the 6 nine-axis inertial sensors total 78 dimensions. There are 6 laser ranging sensors, arranged on the walking aid to measure the distance between the lower leg of the target object and the walking aid; the data they measure total 15 dimensions, comprising 6 dimensions of distance data, 6 dimensions of temperature data and 3 dimensions of left-right distance differences (the differences between the distance data of each pair of left-right symmetric laser ranging sensors). There are 2 plantar pressure sensors, arranged under the two feet of the target object to measure the plantar pressure; the data they measure total 102 dimensions. The motion information therefore has 195 dimensions in total, and the 195-dimensional data are reconstructed into a 15 × 13 matrix. The reconstructed matrix is input into the trained deep convolutional neural network, whose convolution kernel is 3 × 3 and whose activation function is the softmax function
softmax(z_i) = exp(z_i) / Σ_j exp(z_j).
The output result is one of the states shown in Table 1. Fig. 4 is a schematic structural diagram of the deep convolutional neural network model in an embodiment of the present application. As shown in Fig. 4, the deep convolutional neural network model comprises an input layer, a convolutional layer, a pooling layer, a fully connected layer and an output layer. The data processing proceeds as follows: the 15 × 13-dimensional data (motion information) obtained after normalization and reconstruction are fed to the input layer, and after processing by the convolutional layer, the pooling layer and the fully connected layer, 3 × 1-dimensional data (the motion prediction state) are output from the output layer.
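A minimal PyTorch sketch of a network with the described shape (15 × 13 single-channel input, 3 × 3 convolution, pooling, fully connected layer, 3-dimensional softmax output) is shown below; the channel count, pooling size and layer widths are assumptions, since the patent does not specify them.

```python
import torch
import torch.nn as nn

class MotionStateCNN(nn.Module):
    """First preset model sketch: 195-dim motion information reshaped to a
    15 x 13 matrix, passed through conv -> pool -> fully connected layers."""
    def __init__(self, out_dim: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),  # 3x3 convolution kernel
            nn.ReLU(),
            nn.MaxPool2d(2),                            # 15x13 -> 7x6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(8 * 7 * 6, out_dim),
            nn.Softmax(dim=1),                          # softmax activation at the output
        )

    def forward(self, x):
        return self.classifier(self.features(x))

motion_info = torch.randn(1, 195).reshape(1, 1, 15, 13)  # normalized + reconstructed input
print(MotionStateCNN()(motion_info).shape)               # torch.Size([1, 3])
```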
In an alternative embodiment, the second preset model is a deep neural network model, and the data input into the model include the lower limb motion state and the motion prediction state output by the first preset model.
To facilitate training of the model, the lower limb motion data and the distance data in the lower limb motion state are preprocessed: the stride of the target object is obtained from the lower limb motion data, and a first distance is obtained from the distance data, the first distance being the distance between the ankle joint of the target object and the walking aid in the current moving direction of the target object. The lower limb motion state input to the second preset model therefore comprises the stride of the target object and the two first distances corresponding to the target object's left and right limbs. For the specific calculation method, refer to the detailed description of step S203, which is not repeated here.
In an embodiment of the application, the output of the second preset model is the regulation and control parameters, specifically the speeds of the walking aid in the different directions. Taking the target object as reference, the direction the target object faces is forward, the direction behind the target object is backward, the target object's left is leftward and the target object's right is rightward; the regulation and control parameters comprise a forward speed, a leftward speed, a rightward speed and a backward speed, all non-negative, output in the form of a 4 × 1 matrix. Fig. 5 is a schematic structural diagram of the deep neural network model in an embodiment of the present application. As shown in Fig. 5, the deep neural network model comprises an input layer, several hidden layers and an output layer. The input data are 6 × 1-dimensional, comprising the 3 dimensions of the motion prediction state, 1 dimension for the stride and 1 dimension for each of the two first distances; the output layer outputs 4 × 1-dimensional data, namely the forward, leftward, rightward and backward speed values.
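A comparable sketch of the second preset model is given below: a fully connected network mapping the 6-dimensional input (3-dimensional motion prediction state, stride, two first distances) to a 4-dimensional non-negative output; the hidden-layer sizes and the use of ReLU/Softplus are assumptions.

```python
import torch
import torch.nn as nn

class RegulationDNN(nn.Module):
    """Second preset model sketch: 6-dim input -> hidden layers -> 4-dim
    non-negative output (forward, left, right, backward speed values)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 4),
            nn.Softplus(),     # keeps the four speed values non-negative
        )

    def forward(self, x):
        return self.net(x)

x = torch.tensor([[1.0, 0.0, 2.0, 0.55, 0.42, 0.45]])  # [state dims..., stride, S_L, S_R]
print(RegulationDNN()(x))   # 4 x 1 regulation and control parameters (as a 1 x 4 tensor)
```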
In the embodiment of the present application, the first preset model and the second preset model are obtained after being trained. The training process is explained below.
In one embodiment, the training method of the first preset model is as follows. First, a first sample library is constructed, the first sample library comprising a plurality of first samples; the deep convolutional neural network is then trained with the plurality of first samples to obtain the first preset model. A first sample is obtained as follows: several patients are selected as experimental subjects; while using the walking aid, the subjects exhibit the different motion prediction states of Table 1, and their motion information is recorded; the motion prediction state is used as the label and the corresponding motion information as the first sample data, yielding a plurality of first samples. The motion information comprises the lower limb motion data acquired by the inertial sensors, the distance data between the lower limb of the subject and the walking aid acquired by the ranging sensors, and the plantar pressure data acquired by the pressure sensors. Specifically, the inertial sensors are nine-axis inertial sensors, several of which are arranged on the thighs, shanks and toes of the subject, and the data they acquire comprise the angle, angular acceleration and geomagnetism in the X, Y and Z directions as well as the temperature of each nine-axis inertial sensor; the ranging sensors are laser ranging sensors arranged on the walking aid, and the data they acquire are the distances between the walking aid and the subject; the pressure sensors are arranged under the two soles of the subject, and the data they acquire are the plantar pressure data. The plantar pressure data in the embodiment of the present application are plantar pressure distribution data.
In one embodiment, the training method of the second preset model is as follows. First, a second sample library is constructed, the second sample library comprising a plurality of second samples; the deep neural network is then trained with the plurality of second samples to obtain the second preset model. A second sample can also be obtained from the motion information of the experimental subject corresponding to a first sample, the difference being that the motion information additionally includes waist pressure data obtained from a pressure sensor arranged at the waist of the subject, which is connected to the walking aid. The waist pressure data reflect how well the speeds of the subject and the walking aid match: the smaller the waist pressure, the higher the speed consistency between the subject and the walking aid. The specific method is: the stride of the subject is obtained from the lower limb motion data in the first sample data, and the first distances corresponding to the subject's left and right limbs are obtained from the distance data of the left and right limbs; the motion prediction state simulated by the subject, the stride and the two first distances are input into the deep neural network several times to obtain several sets of regulation and control parameters of the walking aid; the walking aid is controlled with each set of regulation and control parameters in turn, and the waist pressure data corresponding to each set are acquired (the walking aid is regulated several times, and the waist pressure after each regulation is recorded); the set of regulation and control parameters corresponding to the minimum waist pressure is used as the label, and the motion prediction state output by the first preset model for the subject, the lower limb motion data and the distance data of the left and right limbs are used as the second sample data.
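The label-selection step for a second sample can be sketched as follows: among the candidate regulation and control parameter sets tried on the walking aid, the set whose trial produced the smallest waist-pressure reading becomes the label. The data layout below is an illustrative assumption.

```python
def select_label(trials):
    """trials: list of (regulation_params, waist_pressure) pairs collected by
    driving the walker with each candidate parameter set in turn.
    The parameter set with the minimum waist pressure is used as the label,
    since lower waist pressure indicates better speed consistency between
    the subject and the walking aid."""
    params, _pressure = min(trials, key=lambda t: t[1])
    return params

trials = [((0.2, 0.0, 0.0, 0.0), 14.5),
          ((0.1, 0.0, 0.0, 0.0), 9.8),
          ((0.3, 0.0, 0.1, 0.0), 17.2)]
print(select_label(trials))   # -> (0.1, 0.0, 0.0, 0.0)
```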
Corresponding to the control method of the walking aid in the above embodiments, fig. 6 shows a block diagram of the control device 6 of the walking aid provided in the embodiments of the present application, and for convenience of explanation, only the parts related to the embodiments of the present application are shown.
Referring to fig. 6, the apparatus includes:
a motion information acquiring unit 61 for acquiring motion information of a target object using the walker, the motion information including a lower limb motion state of the target object;
a motion prediction state obtaining unit 62, configured to input the motion information into a trained first preset model, and obtain a motion prediction state of the target object;
a regulation and control parameter obtaining unit 63, configured to input the motion prediction state and the lower limb motion state of the target object into the trained second preset model to obtain the regulation and control parameters of the walking aid;
and a control unit 64, configured to control the moving direction and the moving speed of the walking aid according to the regulation and control parameters.
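Purely as an illustration of how units 61 to 64 could be composed in software, the sketch below wires them into a single controller class. The class name, the `read()`/`apply()` interfaces and the model objects are assumptions made for the example, not part of the disclosed device.

```python
# Illustrative composition of units 61-64 (all interfaces are assumed).
from dataclasses import dataclass

@dataclass
class RegulationParameters:
    speed: float       # m/s
    direction: float   # degrees relative to the current heading

class WalkerControlDevice:
    def __init__(self, sensors, first_model, second_model, drive):
        self.sensors = sensors            # motion information acquiring unit 61
        self.first_model = first_model    # motion prediction state obtaining unit 62
        self.second_model = second_model  # regulation and control parameter obtaining unit 63
        self.drive = drive                # control unit 64

    def step(self):
        motion_info = self.sensors.read()                 # unit 61
        predicted_state = self.first_model(motion_info)   # unit 62
        params = self.second_model(predicted_state,       # unit 63
                                   motion_info.lower_limb_state)
        self.drive.apply(RegulationParameters(**params))  # unit 64
```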
It should be noted that the information interaction and execution processes between the above devices/units, as well as their specific functions and technical effects, are based on the same concept as the method embodiments of the present application; for details, reference may be made to the method embodiments, which are not repeated here.
The apparatus shown in fig. 6 may be a software unit, a hardware unit, or a combined software and hardware unit built into an existing driving device, may be integrated into the driving device as an independent add-on module, or may exist as a stand-alone driving device.
It should be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional units and modules is only used for illustration, and in practical applications, the above function distribution may be performed by different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the above described functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 7 is a schematic structural diagram of a driving device according to an embodiment of the present application. As shown in fig. 7, the driving device 7 of this embodiment includes: at least one processor 70 (only one is shown in fig. 7), a memory 71, and a computer program 72 stored in the memory 71 and executable on the at least one processor 70; when the processor 70 executes the computer program 72, it implements the steps in any of the walking aid control method embodiments described above.
The driving device 7 may be a mobile phone, a robot (e.g., an intelligent hospital robot), a wearable device (e.g., a smart watch), or another terminal device. The driving device 7 may also be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. In addition, the driving device 7 can be integrated into a circuit drive module on the walking aid. The driving device 7 may include, but is not limited to, the processor 70 and the memory 71. It will be understood by those skilled in the art that fig. 7 is merely an example of the driving device 7 and does not constitute a limitation on it; the driving device 7 may include more or fewer components than shown, combine certain components, or use different components, and may, for example, further include input/output devices, network access devices, and the like.
The processor 70 may be a central processing unit (CPU); it may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In some embodiments, the memory 71 may be an internal storage unit of the driving device 7, such as a hard disk or memory of the driving device 7. In other embodiments, the memory 71 may be an external storage device of the driving device 7, such as a plug-in hard disk, a SmartMedia Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the driving device 7. Further, the memory 71 may include both an internal storage unit and an external storage device of the driving device 7. The memory 71 is used to store the operating system, application programs, a boot loader, data, and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned method embodiments are implemented.
The embodiments of the present application further provide a computer program product which, when run on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the above embodiments can be implemented by a computer program; the computer program can be stored in a computer-readable storage medium, and when executed by a processor, it implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the control apparatus/driving device of the walking aid, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other ways. For example, the above-described apparatus/device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (9)

1. A method of controlling a walker, comprising:
acquiring motion information of a target object using the walker, wherein the motion information comprises a lower limb motion state of the target object;
inputting the motion information into a trained first preset model to obtain a motion prediction state of the target object, wherein the motion prediction state comprises a speed prediction state, and the speed prediction state is that the motion speed of the target object is in a safe range or the motion speed of the target object is in an unsafe range;
if the speed prediction state is that the movement speed of the target object is in a safe range, inputting the movement prediction state of the target object and the movement state of the lower limbs into a trained second preset model to obtain the regulation and control parameters of the walking aid, and controlling the movement direction and the movement speed of the walking aid according to the regulation and control parameters;
and if the speed prediction state is that the movement speed of the target object is in an unsafe range, controlling the walking aid to brake until the speed of the walking aid is zero.
2. The method of claim 1, wherein the lower limb movement state comprises lower limb movement data of the target subject collected by an inertial sensor and distance data between the lower limb of the target subject and the walker collected by a range finding sensor;
before the inputting the motion prediction state and the lower limb motion state of the target object into the trained second preset model, the method further comprises:
acquiring the stride of the target object by using the lower limb movement data;
based on the distance data, a first distance is obtained, the first distance being a distance between the ankle joint of the target object and the walker in a current direction of movement of the target object.
3. The method of claim 2, wherein the lower limb movement data comprises a lower limb lift angle and a lower limb length value;
wherein the acquiring the stride of the target object by using the lower limb movement data comprises:
acquiring a step length of a target lower limb by using the lift angle of the target lower limb of the target object and the length value of the target lower limb;
and obtaining the stride based on the step length of the target lower limb.
4. The method of claim 3, wherein the lift angle of the lower limb is obtained by inertial sensors, the inertial sensors comprising a first inertial sensor disposed in a thigh region of the lower limb and a second inertial sensor disposed in a shank region of the lower limb; the lift angle of the lower limb comprises a thigh lift angle acquired by the first inertial sensor and a shank lift angle acquired by the second inertial sensor; and the length value of the lower limb comprises a thigh length value and a shank length value;
the step length is obtained according to the following formula:
D_S = D_1·sin θ_1 + D_2·sin θ_2
wherein D_S is the step length, D_1 is the thigh length value, D_2 is the shank length value, θ_1 is the thigh lift angle, and θ_2 is the shank lift angle.
5. The method of claim 2, wherein the distance data comprises a plurality of second distances, and the ranging sensor comprises a plurality of laser ranging sensors; the plurality of second distances are distances, acquired by the plurality of laser ranging sensors, between the lower leg of the target object and a plurality of different positions on the walker; and the first distance is obtained by a weighted summation of the plurality of second distances.
6. The method of claim 1, wherein the movement information further comprises plantar pressure data of the target object acquired by a pressure sensor.
7. A control system of a walking aid is characterized by comprising the walking aid, a driving device and a motor arranged on wheels of the walking aid;
the driving device is used for acquiring motion information of a target object using the walking aid, and the motion information comprises a lower limb motion state of the target object; inputting the motion information into a trained first preset model to obtain a motion prediction state of the target object, wherein the motion prediction state comprises a speed prediction state, and the speed prediction state is that the motion speed of the target object is in a safe range or the motion speed of the target object is in an unsafe range;
if the speed prediction state is that the movement speed of the target object is in a safe range, the driving device is used for inputting the movement prediction state of the target object and the movement state of the lower limbs into a trained second preset model to obtain the regulation and control parameters of the walking aid, and the regulation and control parameters are used for controlling the movement direction and the movement speed of the walking aid; controlling the walking aid to move by driving the motor according to the motion prediction state and the regulation and control parameters;
if the speed prediction state is that the movement speed of the target object is in an unsafe range, the driving device is used for driving the motor to control the walking aid to brake until the speed of the walking aid is zero.
8. A drive device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 6.
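As a purely numerical illustration of the step-length relation in claim 4 and the weighted distance summation in claim 5, the following sketch evaluates both; all sensor values and weights below are invented for the example and are not taken from this application.

```python
# Worked example for claims 4 and 5 (all numbers are illustrative).
import math

# Claim 4: step length D_S = D_1*sin(theta_1) + D_2*sin(theta_2)
thigh_length, shank_length = 0.45, 0.40                        # D_1, D_2 in metres
thigh_angle, shank_angle = math.radians(25), math.radians(35)  # theta_1, theta_2
step_length = thigh_length * math.sin(thigh_angle) + shank_length * math.sin(shank_angle)
print(f"step length D_S = {step_length:.3f} m")

# Claim 5: first distance as a weighted sum of several laser-ranged second distances.
second_distances = [0.52, 0.48, 0.50]   # from three laser ranging sensors
weights = [0.5, 0.3, 0.2]               # assumed weighting, sums to 1
first_distance = sum(w * d for w, d in zip(weights, second_distances))
print(f"first distance = {first_distance:.3f} m")
```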
CN202111050505.5A 2021-09-08 2021-09-08 Control method and system of walking aid and driving device Active CN113768760B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111050505.5A CN113768760B (en) 2021-09-08 2021-09-08 Control method and system of walking aid and driving device
PCT/CN2021/137589 WO2023035457A1 (en) 2021-09-08 2021-12-13 Walking aid control method and system, and driving device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111050505.5A CN113768760B (en) 2021-09-08 2021-09-08 Control method and system of walking aid and driving device

Publications (2)

Publication Number Publication Date
CN113768760A CN113768760A (en) 2021-12-10
CN113768760B true CN113768760B (en) 2022-12-20

Family

ID=78841781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111050505.5A Active CN113768760B (en) 2021-09-08 2021-09-08 Control method and system of walking aid and driving device

Country Status (2)

Country Link
CN (1) CN113768760B (en)
WO (1) WO2023035457A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113768760B (en) * 2021-09-08 2022-12-20 中国科学院深圳先进技术研究院 Control method and system of walking aid and driving device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107072867A (en) * 2016-12-26 2017-08-18 深圳前海达闼云端智能科技有限公司 A kind of blind safety trip implementation method, system and wearable device
WO2018081986A1 (en) * 2016-11-03 2018-05-11 浙江大学 Wearable device and real-time step length measurement method for device
CN109223461A (en) * 2018-08-28 2019-01-18 国家康复辅具研究中心 Intelligent walk helper control system
CN109589247A (en) * 2018-10-24 2019-04-09 天津大学 It is a kind of based on brain-machine-flesh information loop assistant robot system
CN109771225A (en) * 2017-11-15 2019-05-21 三星电子株式会社 Device of walking aid and its control method
CN210205291U (en) * 2019-01-24 2020-03-31 中国科学技术大学 Follow-up lower limb gait training rehabilitation robot system
CN111557828A (en) * 2020-04-29 2020-08-21 天津科技大学 Active stroke lower limb rehabilitation robot control method based on healthy side coupling
CN111659006A (en) * 2020-06-11 2020-09-15 浙江大学 Gait acquisition and neuromuscular electrical stimulation system based on multi-sensing fusion
CN112842824A (en) * 2021-02-24 2021-05-28 郑州铁路职业技术学院 Training method for lower limb rehabilitation recovery

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4686681B2 (en) * 2004-10-05 2011-05-25 国立大学法人東京工業大学 Walking assistance system
CN102973395B (en) * 2012-11-30 2015-04-08 中国舰船研究设计中心 Multifunctional intelligent blind guiding method, processor and multifunctional intelligent blind guiding device
CN103536424A (en) * 2013-10-26 2014-01-29 河北工业大学 Control method of gait rehabilitation training robot
KR101556117B1 (en) * 2013-12-27 2015-09-30 한국산업기술대학교산학협력단 System and method for estimating joint angles value of knee joint rehabilitation robot
CN104111445B (en) * 2014-07-09 2017-01-25 上海交通大学 Ultrasonic-array auxiliary positioning method and system used for indoor navigation
TWI564129B (en) * 2015-11-27 2017-01-01 財團法人工業技術研究院 Method for estimating posture of robotic walking aid
EP3205269B1 (en) * 2016-02-12 2024-01-03 Tata Consultancy Services Limited System and method for analyzing gait and postural balance of a person
CN107115114A (en) * 2017-04-28 2017-09-01 王春宝 Human Stamina evaluation method, apparatus and system
CN108074632A (en) * 2017-10-31 2018-05-25 深圳市罗伯医疗科技有限公司 Method, terminal device and the computer readable storage medium that walk helper calculates
CN109953761B (en) * 2017-12-22 2021-10-22 浙江大学 Lower limb rehabilitation robot movement intention reasoning method
US10667980B2 (en) * 2018-03-28 2020-06-02 The Board Of Trustees Of The University Of Alabama Motorized robotic walker guided by an image processing system for human walking assistance
CN108903947B (en) * 2018-05-18 2020-07-17 深圳市丞辉威世智能科技有限公司 Gait analysis method, gait analysis device, and readable storage medium
TWI719353B (en) * 2018-10-29 2021-02-21 緯創資通股份有限公司 Walker capable of determining use intent and a method of operating the same
CN110370251A (en) * 2019-08-07 2019-10-25 广东博智林机器人有限公司 Exoskeleton robot, walk help control method, terminal and computer equipment
CN110584894A (en) * 2019-08-12 2019-12-20 珠海格力电器股份有限公司 Wheelchair control method and device and intelligent wheelchair
CN111389006B (en) * 2020-03-13 2023-04-07 网易(杭州)网络有限公司 Action prediction method and device
CN112263440B (en) * 2020-11-17 2022-11-01 南京工程学院 Flexible lower limb exoskeleton and walking aid co-fusion rehabilitation assistance method and device
CN112683288B (en) * 2020-11-30 2022-08-05 北方工业大学 Intelligent guiding method for assisting blind in crossing street in intersection environment
CN113143256B (en) * 2021-01-28 2023-09-26 上海电气集团股份有限公司 Gait feature extraction method, lower limb evaluation and control method, device and medium
CN113768760B (en) * 2021-09-08 2022-12-20 中国科学院深圳先进技术研究院 Control method and system of walking aid and driving device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on a gait analysis system for human fall-risk detection; Jiang Tao; China Master's Theses Full-text Database, Basic Sciences; 2021-08-15 (No. 8); pp. 28-47 *

Also Published As

Publication number Publication date
WO2023035457A1 (en) 2023-03-16
CN113768760A (en) 2021-12-10

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant