CN113499173B - Real-time instance segmentation-based terrain identification and motion prediction system for lower artificial limb - Google Patents


Info

Publication number
CN113499173B
CN113499173B (application number CN202110780493.5A)
Authority
CN
China
Prior art keywords
lower limb
terrain
information
real
artificial limb
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110780493.5A
Other languages
Chinese (zh)
Other versions
CN113499173A (en
Inventor
李智军
徐梁睿
李琴剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202110780493.5A priority Critical patent/CN113499173B/en
Publication of CN113499173A publication Critical patent/CN113499173A/en
Application granted granted Critical
Publication of CN113499173B publication Critical patent/CN113499173B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61FFILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F2/00Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
    • A61F2/50Prostheses not implantable in the body
    • A61F2/60Artificial legs or feet or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61FFILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F2/00Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
    • A61F2/50Prostheses not implantable in the body
    • A61F2/68Operating or control means
    • A61F2/70Operating or control means electrical
    • A61F2002/704Operating or control means electrical computer-controlled, e.g. robotic control

Abstract

The invention provides a real-time instance segmentation-based terrain identification and motion prediction system for a lower limb prosthesis. A binocular camera collects terrain information around the lower limb prosthesis; an image acquisition and information transmission hardware platform gathers the information obtained by the binocular camera and uploads it to a cloud server; the hardware platform is electrically connected to the lower limb prosthesis controller, which controls the motion mode of the prosthesis. A target terrain detection module, an instance segmentation module, a feature matching module and a position and size calculation module are deployed on the cloud server; terrain information is processed by these modules in sequence and then fed back to the image acquisition and information transmission hardware platform. The invention intelligently optimizes the lower limb prosthesis across multiple movement modes, so that the prosthesis can accurately sense the surrounding terrain distribution in real time and promptly correct and adjust its motion control strategy, improving the mobility and wearing comfort of the wearer.

Description

Real-time instance segmentation-based terrain identification and motion prediction system for lower artificial limb
Technical Field
The invention relates to the field of active environment perception for lower limb prostheses in complex terrain environments, and in particular to a terrain recognition and motion prediction system for lower limb prostheses based on real-time instance segmentation.
Background
A lower limb prosthesis assists disabled people with movement and rehabilitation training. It is an externally powered mechanical assistive device that integrates mechanical, electronic, sensor, intelligent control and transmission technologies and is designed around the ergonomic structure of the wearer, providing support, protection, movement and other auxiliary functions. The intelligence and comfort of a lower limb prosthesis largely determine the wearer's mobility.
Chinese patent document CN209422174U discloses a vision-integrated powered prosthesis environment recognition system comprising a prosthesis body, a power module, a motion sensing module, a vision detection module and a control module. The power module drives the prosthesis body; the motion sensing module obtains state information of the prosthesis body; the vision detection module obtains information about the surrounding environment. From this state and environment information, the control module judges road conditions and obstacles around the wearer, predicts the motion trend of the prosthesis and infers the wearer's motion intention, thereby controlling the power module so that the prosthesis moves appropriately to help the patient adapt to different road conditions or cross obstacles. The system can sense the wearer's motion intention in advance and continuously monitor the surrounding road conditions during use, with strong real-time data feedback and stability, making it convenient for the patient to use.
At present, traditional lower limb prostheses have a low degree of intelligence, a single movement mode and delayed strategy switching; they identify the surrounding terrain inaccurately, recognize it slowly and cannot predict upcoming terrain; and they perform poorly in continuously changing, complex terrain environments. These problems not only prevent the wearer from regaining good mobility but may even put the wearer at risk of injury.
Disclosure of Invention
In view of the shortcomings in the prior art, it is an object of the present invention to provide a real-time instance segmentation-based terrain identification and motion prediction system for a lower limb prosthesis.
According to the real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system provided by the invention, a local end comprises a lower limb prosthesis, a lower limb prosthesis controller, a binocular camera and an image acquisition and information transmission hardware platform, and a cloud end comprises a cloud server;
the binocular camera is installed on the lower limb artificial limb and used for collecting topographic information around the lower limb artificial limb, the image collection and information transmission hardware platform collects information obtained by the binocular camera and uploads the information to the cloud server, and the lower limb artificial limb controller controls the motion mode of the lower limb artificial limb;
the cloud server is provided with a target terrain detection module, an instance segmentation module, a feature matching module and a position and size calculation module, and terrain information is processed by these modules in sequence and then fed back to the lower limb prosthesis controller.
Preferably, the lower limb prosthesis comprises a knee joint having one degree of freedom and an ankle joint having two active degrees of freedom: flexion/extension and eversion/inversion.
Preferably, the binocular camera is installed at the front end of the knee joint part and used for acquiring RGB images of the terrain around the lower limb prosthesis, and the binocular camera is configured to: 1280 × 720 resolution, 30 frames/second sampling rate.
Preferably, the image acquisition and information transmission hardware platform is located in a gap in the lower artificial limb structure, receives the RGB images acquired by the binocular camera in real time, transmits the RGB images to the cloud server through a wireless network, receives the terrain information fed back by the cloud server in real time, and outputs the terrain information to the lower artificial limb motion controller.
Preferably, the cloud server is equipped with a CPU and a GPU: the CPU is multithreaded and serves multiple users simultaneously, while the GPU accelerates RGB image processing.
Preferably, the lower limb prosthesis controller controls the motion of the lower limb prosthesis and comprises the following steps:
step S1: acquire, based on an IMU, motion data of a healthy person's lower limbs in different terrain environments, and store the data in the lower limb prosthesis controller;
step S2: the lower limb prosthesis controller receives the terrain data for the prosthesis's current location sent by the cloud server;
step S3: the lower limb prosthesis controller controls the prosthesis to adopt a motion state suited to the terrain, based on the human lower limb motion data;
step S4: for terrain or obstacles still some distance away, the lower limb prosthesis controller predicts, from the current direction and speed of motion, whether they can be traversed and the expected arrival time, so as to achieve a smoother state switch in software.
Preferably, the target terrain detection module uses a trained and optimized MobileNet-based YoloV4-Lite convolutional neural network to preliminarily identify the different terrain types in the RGB image, marks each with a bounding box (Box), and outputs the results to the instance segmentation module.
Preferably, the instance segmentation module uses a trained Deep Snake convolutional neural network based on circular convolution: an initial contour is extracted from each Box obtained by target detection using the Extreme Points method, and the Deep Snake network then deforms the initial contour into a terrain contour that accurately fits each terrain region, which is output to the feature matching module.
Preferably, the feature matching module uses the Speeded-Up Robust Features (SURF) method to extract and match features among the pixels, collected by the binocular camera and processed by the instance segmentation module, that lie within same-type terrain contours captured at the same moment; from the disparity of each matched feature point and the binocular camera parameters it computes the feature point's depth, and from that its three-dimensional coordinates.
Preferably, the position and size calculation module calculates the position, length, width and extent of identified terrain from the coordinates of feature points on the contour edges, and calculates the length, width, height and position of identified steps, obstacles and the like from the coordinates of the contour corner points.
Compared with the prior art, the invention has the following beneficial effects:
1. The real-time instance segmentation-based lower limb prosthesis terrain identification and motion prediction system intelligently optimizes the prosthesis across multiple motion modes, enabling it to accurately sense the surrounding terrain distribution in real time and promptly correct and adjust its motion control strategy, improving the mobility and wearing comfort of the wearer.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic structural diagram of a real-time example segmentation-based lower limb prosthesis terrain identification and motion prediction system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a depth separable convolution in a real-time instance segmentation based lower limb prosthesis terrain identification and motion prediction system according to an embodiment of the present application;
FIG. 3 is a block diagram of a YOLOV4 network in a real-time example segmentation based lower limb prosthesis terrain identification and motion prediction system according to an embodiment of the present application;
FIG. 4 is a diagram of a Deep Snake network structure based on cyclic convolution in a real-time example segmentation based terrain identification and motion prediction system for a lower limb prosthesis according to an embodiment of the present application;
fig. 5 is a schematic diagram of binocular vision positioning of the real-time segmentation-based system for terrain recognition and motion prediction of a lower limb prosthesis according to the embodiment of the present application.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but do not limit it in any way. It should be noted that various changes and modifications, obvious to those skilled in the art, can be made without departing from the spirit of the invention, all of which fall within its scope.
A real-time instance segmentation-based lower limb prosthesis terrain identification and motion prediction system, referring to FIG. 1, includes a local end and a cloud end. The local end includes the lower limb prosthesis, a lower limb prosthesis controller, a binocular camera, and an image acquisition and information transmission hardware platform; the cloud end includes a cloud server.
The lower limb prosthesis supports multiple movement modes. The shank and foot of the prosthesis are 3D-printed from ABS resin, while the knee and ankle joints are made of 1060 aluminum alloy. Both joints are driven by Maxon motors: the knee joint has one degree of freedom, and the ankle has two active degrees of freedom, flexion/extension and eversion/inversion.
The binocular camera is mounted at the front of the knee joint of the lower limb prosthesis and fixed by a motorized pan-tilt mount. It collects RGB images of the terrain around the prosthesis and is configured for 1280 × 720 resolution at a 30 frames/second sampling rate. The image acquisition and information transmission hardware platform sits in a gap in the prosthesis structure: it receives the RGB images from the binocular camera in real time, transmits them to the cloud server over a wireless network, receives the terrain information fed back by the cloud server in real time, and outputs it to the lower limb prosthesis motion controller, which controls the prosthesis's motion mode.
Control of the lower limb prosthesis movement by the lower limb prosthesis controller comprises the following steps:
step S1: acquire, based on an IMU, motion data of a healthy person's lower limbs while walking and running on sand, grass and asphalt, climbing up and down ramps and steps, and crossing obstacles of different widths and heights, and store the data in the lower limb prosthesis controller;
step S2: the lower limb prosthesis controller receives the terrain data for the prosthesis's current location sent by the cloud server;
step S3: the lower limb prosthesis controller controls the prosthesis to adopt a motion state suited to the terrain, based on the human lower limb motion data;
step S4: for terrain or obstacles still some distance away, the lower limb prosthesis controller predicts, from the current direction and speed of motion, whether they can be traversed and the expected arrival time, and prepares to switch movement strategies, achieving a smoother state switch in software and improving the comfort of the prosthesis.
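Step S4 amounts to a heading-and-speed check against a known terrain position. A minimal sketch of such a predictor follows; the function name, the angular tolerance and the planar-coordinate model are illustrative assumptions, not details from the patent:

```python
import math

def predict_arrival(pos_xy, heading_deg, speed_mps, target_xy, tolerance_deg=30.0):
    """Estimate whether the wearer is heading toward a terrain feature and,
    if so, when they will reach it. Returns (will_pass, eta_seconds).
    Hypothetical helper: a 2D planar model with heading in degrees."""
    dx = target_xy[0] - pos_xy[0]
    dy = target_xy[1] - pos_xy[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or speed_mps <= 0:
        return False, None
    bearing = math.degrees(math.atan2(dy, dx))
    # smallest signed angle between heading and bearing, folded to [-180, 180]
    diff = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
    if diff > tolerance_deg:
        return False, None          # not walking toward the feature
    return True, dist / speed_mps   # expected arrival time in seconds
```

With such an estimate in hand, the controller can pre-load the motion strategy for the upcoming terrain rather than switching abruptly on contact.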
A target terrain detection module, an instance segmentation module, a feature matching module and a position and size calculation module are deployed on the cloud server. Terrain information is processed by these modules in sequence and then fed back to the image acquisition and information transmission hardware platform, which sends a control instruction to the lower limb prosthesis controller.
The cloud server is equipped with a CPU and a GPU: the CPU is multithreaded and serves multiple users simultaneously, while the GPU accelerates RGB image processing. To cope with growth in user numbers and possible failures, the system offers high performance scalability and a redundancy backup mechanism.
The target terrain detection module must perform fine-grained image detection: a detection framework based on a deep convolutional neural network, with good real-time performance and high accuracy, is designed and built, and the category and accurate contour of each target object are determined by the network. A detection dataset of target terrain is collected and manually labeled, followed by training, validation and test set division, data preprocessing and related work. A deep convolutional neural network of suitable structure and an appropriate loss function are designed, and the network is trained on the dataset. The hyperparameters and network structure are tuned continually according to the training results, aiming for a network with good real-time performance and high recognition accuracy. The invention uses an improved MobileNet-based YoloV4-Lite to identify grass, sand, asphalt and steps. MobileNet is characterized by depthwise separable convolution: channel-by-channel convolution followed by a pointwise linear combination greatly reduces the parameter count while preserving the feature extraction quality, as shown in FIG. 2.
Referring to FIG. 3, the input of the original YOLOV4 network is an RGB image of height 416 and width 416, and features are extracted through the CSPDarknet-53 backbone. Three feature layers of different sizes are used to achieve multi-scale feature fusion; the offset of the target position relative to a default rectangular box (default box) is regressed while a classification confidence is output. The original YOLOV4 network structure is complex and slow to predict, so the structure must be adjusted to meet the real-time detection speed requirement. Trading off accuracy against speed, the MobileNet network is used as the backbone to extract features, while the input image resolution and the number of feature layers used for detection remain unchanged.
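The parameter saving from depthwise separable convolution mentioned above is easy to verify by counting weights. The following sketch compares the two layer types; the layer sizes are illustrative, not taken from the patent, and bias terms are omitted:

```python
def conv_params(k, c_in, c_out):
    # standard k x k convolution: every output channel mixes all input channels
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    # depthwise k x k convolution (one filter per input channel),
    # followed by a 1x1 pointwise convolution that mixes channels
    return k * k * c_in + c_in * c_out

# example: a 3x3 layer with 256 input and 256 output channels
std = conv_params(3, 256, 256)          # 589,824 weights
sep = dw_separable_params(3, 256, 256)  # 67,840 weights
print(f"separable/standard ratio: {sep / std:.3f}")  # equals 1/k^2 + 1/c_out
```

The ratio 1/k² + 1/c_out (about 0.115 here) is why MobileNet can serve as a lightweight backbone without changing the input resolution.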
The instance segmentation module must further separate the background within the series of rectangular boxes and terrain category labels produced by the target terrain detection module to obtain an accurate contour of each target terrain, with low computational complexity and good real-time performance. As shown in FIG. 4, the Deep Snake algorithm represents an object's shape by its outermost contour, with a parameter count nearly as small as the rectangular box representation. First, the midpoints of the four sides of a detected terrain's rectangular box are connected to form a diamond; the four diamond vertices are fed into the Deep Snake network, which predicts offsets pointing to the four extreme pixels of the target terrain (top, bottom, left and right) in the image, and the diamond is deformed by these offsets to yield the coordinates of the four extreme points. Second, a line segment is extended at each extreme point, and the segments are connected to form an irregular octagon that serves as the initial contour. Then, since an object contour can be regarded as a cycle graph in which every node has exactly two neighbours in a fixed order, contour refinement can be formulated as iterative optimization by convolution: a deep network based on circular convolution fuses the features of the current contour points, regresses offsets pointing toward the object contour, and deforms the contour point coordinates by those offsets. Finally, starting from the initial contour, this process is repeated until the computed contour fits the actual edge of the target object well and the predicted offsets fall below a threshold, and a result consisting of the terrain category label and the set of contour point coordinates is output.
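The diamond and octagon constructions described above are purely geometric and can be sketched directly. In the code below, the fraction of the box edge used to extend each segment (`frac`) and the clipping rule are my assumptions for illustration; in Deep Snake the segment length is tied to the box edge length:

```python
def box_diamond(x0, y0, x1, y1):
    """Midpoints of the four box sides (top, right, bottom, left) --
    the diamond fed to the network to predict extreme-point offsets."""
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    return [(cx, y0), (x1, cy), (cx, y1), (x0, cy)]

def octagon_from_extremes(extremes, box, frac=0.25):
    """Build the 8-vertex initial contour: at each extreme point, extend a
    segment along the adjacent box edge by frac of the edge length in both
    directions, clipped to the box, then connect the segment endpoints."""
    x0, y0, x1, y1 = box
    w, h = x1 - x0, y1 - y0
    (tx, ty), (rx, ry), (bx, by), (lx, ly) = extremes  # top, right, bottom, left
    return [
        (max(tx - frac * w, x0), ty), (min(tx + frac * w, x1), ty),  # top edge
        (rx, max(ry - frac * h, y0)), (rx, min(ry + frac * h, y1)),  # right edge
        (min(bx + frac * w, x1), by), (max(bx - frac * w, x0), by),  # bottom edge
        (lx, min(ly + frac * h, y1)), (lx, max(ly - frac * h, y0)),  # left edge
    ]
```

The octagon already hugs the object more tightly than the box, which is why it makes a good starting point for the iterative circular-convolution refinement.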
In addition, terrain belongs to the background category, so occlusion by foreground objects is quite common. To address this, a grouped detection method can be used: the discontinuous parts of a terrain cut apart by foreground objects are assigned to different detection rectangles, and a contour is then extracted from each rectangle separately.
The feature matching module and the position and size calculation module extract and match features among the pixels, collected by the binocular camera and processed by the instance segmentation module, that lie within same-type terrain contours captured at the same moment; from the disparity of each matched feature point and the binocular camera parameters they compute the feature point's depth, and from that its three-dimensional coordinates. The binocular vision positioning principle is shown in FIG. 5. For identified grass, sand, asphalt and the like, the position, length, width and extent can be calculated from the coordinates of feature points on the contour edges; for identified steps, obstacles and the like, the length, width, height and position can be calculated from the coordinates of the contour corner points.
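For a rectified stereo pair, the binocular positioning of FIG. 5 reduces to similar-triangle relations: depth Z = f·B/d for disparity d, with X and Y following from the pinhole model. A minimal sketch is below; the function name and the calibration values in the comments are illustrative, not the patent's actual calibration:

```python
def triangulate(u_l, v_l, u_r, f, baseline, cx, cy):
    """Recover the 3D point (in the left camera frame) of a feature matched
    across a rectified stereo pair.
    u_l, v_l: pixel coordinates in the left image; u_r: column in the right
    image; f: focal length in pixels; baseline: camera separation in metres;
    cx, cy: principal point."""
    d = u_l - u_r                  # disparity in pixels (positive for valid points)
    if d <= 0:
        raise ValueError("non-positive disparity: feature cannot be triangulated")
    Z = f * baseline / d           # depth along the optical axis
    X = (u_l - cx) * Z / f         # lateral offset
    Y = (v_l - cy) * Z / f         # vertical offset
    return X, Y, Z
```

For example, with f = 700 px, a 0.12 m baseline and a 40 px disparity, a feature lies about 2.1 m ahead; applying this to contour-edge and corner points yields the terrain positions and dimensions described above.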
Those skilled in the art will appreciate that, in addition to being implemented as pure computer-readable program code, the system and its various devices, modules and units provided by the present invention can be implemented entirely by logically programming the method steps in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. The system and its devices, modules and units can therefore be regarded as a hardware component, and the devices, modules and units within it that realize various functions can be regarded as structures within that hardware component, or as both software modules implementing the methods and structures within the hardware component.
In the description of the present application, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, merely for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and therefore, are not to be construed as limiting the present application.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (6)

1. A real-time instance segmentation based lower limb prosthesis terrain identification and motion prediction system, characterized by: the local end comprises a lower limb prosthesis, a lower limb prosthesis controller, a binocular camera and an image acquisition and information transmission hardware platform, and the cloud end comprises a cloud server;
the binocular camera is installed on the lower limb artificial limb and used for collecting topographic information around the lower limb artificial limb, the image collection and information transmission hardware platform is used for collecting information obtained by the binocular camera and uploading the information to the cloud server, the image collection and information transmission hardware platform is electrically connected with the lower limb artificial limb controller, and the lower limb artificial limb controller is used for controlling the motion mode of the lower limb artificial limb;
a target terrain detection module, an instance segmentation module, a feature matching module and a position and size calculation module are deployed on the cloud server, and terrain information is processed by these modules in sequence and then fed back to the image acquisition and information transmission hardware platform;
the target terrain detection module preliminarily identifies the different terrain types in the RGB image using a trained and optimized MobileNet-based YoloV4-Lite convolutional neural network, marks each with a bounding box (Box), and outputs the results to the instance segmentation module;
the instance segmentation module uses a trained Deep Snake convolutional neural network based on circular convolution: an initial contour is extracted from each Box obtained by target detection using the Extreme Points method, and the Deep Snake network then deforms the initial contour into a terrain contour that accurately fits each terrain region, which is output to the feature matching module;
the feature matching module uses the Speeded-Up Robust Features method to extract and match features among the pixels, collected by the binocular camera and processed by the instance segmentation module, that lie within same-type terrain contours captured at the same moment; from the disparity of each matched feature point and the binocular camera parameters it computes the feature point's depth, and from that its three-dimensional coordinates;
the position and size calculation module calculates the position, length, width and extent of identified terrain from the coordinates of feature points on the contour edges, and calculates the length, width, height and position of identified steps, obstacles and the like from the coordinates of the contour corner points.
2. The real-time instance segmentation-based lower limb prosthesis terrain identification and motion prediction system according to claim 1, wherein: the lower limb prosthesis comprises a knee joint having one degree of freedom and an ankle joint having two active degrees of freedom: flexion/extension and eversion/inversion.
3. A real-time instance segmentation based lower limb prosthetic terrain identification and motion prediction system according to claim 1, wherein: the binocular camera is arranged at the front end of the knee joint part and used for acquiring RGB images of terrain around the lower limb prosthesis, and the configuration of the binocular camera is as follows: 1280 × 720 resolution, 30 frames/second sampling rate.
4. A real-time instance segmentation based lower limb prosthetic topography recognition and motion prediction system according to claim 1, wherein: the image acquisition and information transmission hardware platform is located in a gap in the lower limb artificial limb structure body, receives RGB images acquired by the binocular camera in real time, transmits the RGB images to the cloud server through a wireless network, receives terrain information fed back by the cloud server in real time, and outputs the terrain information to the lower limb artificial limb motion controller.
5. The real-time instance segmentation based lower limb prosthesis terrain identification and motion prediction system according to claim 1, wherein: the cloud server is equipped with a CPU and a GPU; the CPU supports multithreaded processing and serves multiple users simultaneously, while the GPU accelerates RGB image processing.
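The serving model in claim 5 — a multithreaded CPU dispatching concurrent user requests to a shared, GPU-accelerated image-processing routine — can be sketched as a thread pool. This is an illustrative reading only; the patent specifies no implementation, and `process_frame` is a hypothetical stand-in for the actual segmentation inference:

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(user_id, frame):
    """Stand-in for GPU-accelerated instance segmentation: tags the frame
    so the multi-user dispatch pattern can be demonstrated."""
    return user_id, f"terrain-info({frame})"

# One worker thread per concurrent user; each user's frame is processed
# independently and results are collected as the futures complete.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(process_frame, uid, f"frame-{uid}") for uid in range(4)]
    results = sorted(f.result() for f in futures)
```

In a real deployment the worker would hand the frame to the GPU and return the terrain contours and class labels for transmission back to the prosthesis.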
6. The real-time instance segmentation based lower limb prosthesis terrain identification and motion prediction system according to claim 1, wherein: the lower limb prosthesis controller controls the motion of the lower limb prosthesis through the following steps:
step S1: acquiring motion data of the lower limbs of the healthy person in different environments based on the IMU, and storing the motion data in a lower limb prosthesis controller;
step S2: the lower limb prosthesis controller controls and receives the current topographic data of the lower limb prosthesis sent by the cloud server;
and step S3: the lower limb prosthesis controller controls the lower limb prosthesis to adopt a motion state suitable for the terrain according to the motion data of the lower limbs of the human body;
and step S4: for terrain or obstacles that are still at a distance, the lower limb prosthesis controller predicts whether a passing and predicted arrival time is likely based on the current direction and speed of motion to achieve a smoother state switch from software.
CN202110780493.5A 2021-07-09 2021-07-09 Real-time instance segmentation-based terrain identification and motion prediction system for lower artificial limb Active CN113499173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110780493.5A CN113499173B (en) 2021-07-09 2021-07-09 Real-time instance segmentation-based terrain identification and motion prediction system for lower artificial limb

Publications (2)

Publication Number Publication Date
CN113499173A CN113499173A (en) 2021-10-15
CN113499173B true CN113499173B (en) 2022-10-28

Family

ID=78012619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110780493.5A Active CN113499173B (en) 2021-07-09 2021-07-09 Real-time instance segmentation-based terrain identification and motion prediction system for lower artificial limb

Country Status (1)

Country Link
CN (1) CN113499173B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114145890B (en) * 2021-12-02 2023-03-10 中国科学技术大学 Prosthetic device with terrain recognition function
CN116030536B (en) * 2023-03-27 2023-06-09 国家康复辅具研究中心 Data collection and evaluation system for use state of upper limb prosthesis

Citations (8)

Publication number Priority date Publication date Assignee Title
CN104408718A (en) * 2014-11-24 2015-03-11 中国科学院自动化研究所 Gait data processing method based on binocular vision measuring
CN205031391U (en) * 2015-10-13 2016-02-17 河北工业大学 Road conditions recognition device of power type artificial limb
CN109446911A (en) * 2018-09-28 2019-03-08 北京陌上花科技有限公司 Image detecting method and system
CN209422174U (en) * 2018-08-02 2019-09-24 南方科技大学 A kind of powered prosthesis Context awareness system merging vision
CN110901788A (en) * 2019-11-27 2020-03-24 佛山科学技术学院 Biped mobile robot system with literacy ability
CN110974497A (en) * 2019-12-30 2020-04-10 南方科技大学 Electric artificial limb control system and control method
CN111247557A (en) * 2019-04-23 2020-06-05 深圳市大疆创新科技有限公司 Method and system for detecting moving target object and movable platform
CN111658246A (en) * 2020-05-19 2020-09-15 中国科学院计算技术研究所 Intelligent joint prosthesis regulating and controlling method and system based on symmetry

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN101453964B (en) * 2005-09-01 2013-06-12 奥瑟Hf公司 System and method for determining terrain transitions
US8229163B2 (en) * 2007-08-22 2012-07-24 American Gnc Corporation 4D GIS based virtual reality for moving target prediction
CN109766864A (en) * 2019-01-21 2019-05-17 开易(北京)科技有限公司 Image detecting method, image detection device and computer readable storage medium
CN111174781B (en) * 2019-12-31 2022-03-04 同济大学 Inertial navigation positioning method based on wearable device combined target detection

Non-Patent Citations (1)

Title
Contour detection algorithms based on deep learning: a survey; Lin Chuan et al.; Journal of Guangxi University of Science and Technology; 2019-04-15; Vol. 30, No. 02; pp. 1-9 *

Similar Documents

Publication Publication Date Title
CN113499173B (en) Real-time instance segmentation-based terrain identification and motion prediction system for lower artificial limb
Liu et al. Vision-assisted autonomous lower-limb exoskeleton robot
Krausz et al. Depth sensing for improved control of lower limb prostheses
CN107562052B (en) Hexapod robot gait planning method based on deep reinforcement learning
Hu et al. 3D Pose tracking of walker users' lower limb with a structured-light camera on a moving platform
CN102074034B (en) Multi-model human motion tracking method
CN109344694B (en) Human body basic action real-time identification method based on three-dimensional human body skeleton
US20140328519A1 (en) Method and apparatus for estimating a pose
CN106022213A (en) Human body motion recognition method based on three-dimensional bone information
US20120070070A1 (en) Learning-based pose estimation from depth maps
WO2005088244A1 (en) Plane detector, plane detecting method, and robot apparatus with plane detector
CN107742097B (en) Human behavior recognition method based on depth camera
CN112025729A (en) Multifunctional intelligent medical service robot system based on ROS
Varol et al. A feasibility study of depth image based intent recognition for lower limb prostheses
CN114099234B (en) Intelligent rehabilitation robot data processing method and system for assisting rehabilitation training
CN1766831A (en) A kind of skeleton motion extraction method of the motion capture data based on optics
CN103942829A (en) Single-image human body three-dimensional posture reconstruction method
CN111881888A (en) Intelligent table control method and device based on attitude identification
CN102156994B (en) Joint positioning method for single-view unmarked human motion tracking
Struebig et al. Stair and ramp recognition for powered lower limb exoskeletons
CN114419842A (en) Artificial intelligence-based falling alarm method and device for assisting user in moving to intelligent closestool
CN103679712A (en) Human body posture estimation method and human body posture estimation system
Wang et al. A single RGB camera based gait analysis with a mobile tele-robot for healthcare
Miao et al. Stereo-based Terrain Parameters Estimation for Lower Limb Exoskeleton
CN113288611B (en) Operation safety guarantee method and system based on electric wheelchair traveling scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant