CN113499173A - Real-time instance segmentation-based terrain recognition and motion prediction system for lower limb prosthesis - Google Patents
- Publication number
- CN113499173A (application CN202110780493.5A)
- Authority
- CN
- China
- Prior art keywords
- lower limb
- terrain
- information
- real
- prosthesis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F2/00—Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
- A61F2/50—Prostheses not implantable in the body
- A61F2/60—Artificial legs or feet or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F2/00—Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
- A61F2/50—Prostheses not implantable in the body
- A61F2/68—Operating or control means
- A61F2/70—Operating or control means electrical
- A61F2002/704—Operating or control means electrical computer-controlled, e.g. robotic control
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Transplantation (AREA)
- Molecular Biology (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Computing Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Heart & Thoracic Surgery (AREA)
- Evolutionary Biology (AREA)
- Orthopedic Medicine & Surgery (AREA)
- Bioinformatics & Computational Biology (AREA)
- Cardiology (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Vascular Medicine (AREA)
- Animal Behavior & Ethology (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a real-time instance segmentation-based terrain recognition and motion prediction system for a lower limb prosthesis. A binocular camera collects terrain information around the lower limb prosthesis; an image acquisition and information transmission hardware platform gathers the information obtained by the binocular camera and uploads it to a cloud server. The hardware platform is electrically connected to the lower limb prosthesis controller, which controls the motion mode of the prosthesis. A target terrain detection module, an instance segmentation module, a feature matching module, and a position and size calculation module are deployed on the cloud server; terrain information is processed by these modules in sequence and then fed back to the image acquisition and information transmission hardware platform. The invention intelligently optimizes the lower limb prosthesis across multiple motion modes, enabling it to accurately perceive the surrounding terrain distribution in real time and to correct and adjust its motion control strategy promptly, improving the mobility and wearing comfort of the lower limb prosthesis wearer.
Description
Technical Field
The invention relates to the field of active environment perception for lower limb prostheses in complex terrain environments, and in particular to a real-time instance segmentation-based terrain recognition and motion prediction system for a lower limb prosthesis.
Background
A lower limb prosthesis assists disabled people in movement and rehabilitation training. It is an externally powered mechanical assistive device that integrates mechanical, electronic, sensing, intelligent control, and transmission technologies and is designed according to ergonomic structural characteristics. It provides support, protection, movement, and other auxiliary functions, and its level of intelligent control and comfort largely determine the wearer's mobility.
Chinese patent document CN209422174U discloses a vision-integrated dynamic prosthesis environment recognition system comprising a prosthesis body, a power module, a motion sensing module, a vision detection module, and a control module. The power module drives the prosthesis body; the motion sensing module obtains the state information of the prosthesis body; and the vision detection module obtains information about the environment around the prosthesis body. From the state and environment information, the control module judges the road conditions and obstacles around the wearer, predicts the motion trend of the prosthesis, and infers the wearer's motion intention, thereby controlling the power module so that the prosthesis moves appropriately, helping the patient adapt to different road conditions or cross obstacles. The system senses the wearer's motion intention in advance and continuously monitors the surrounding road conditions while the prosthesis is in use, with strong real-time data feedback and stability, making it convenient for the patient to use.
At present, traditional lower limb prostheses have a low degree of intelligence, a single motion mode, and delayed strategy switching; they identify the surrounding terrain inaccurately and slowly and cannot predict upcoming terrain; and they perform poorly in continuously changing complex terrain. These problems not only prevent the wearer of a lower limb prosthesis from regaining good mobility but may even put the wearer at risk of injury.
Disclosure of Invention
In view of the shortcomings in the prior art, it is an object of the present invention to provide a real-time instance segmentation-based terrain identification and motion prediction system for a lower limb prosthesis.
According to the real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system provided by the invention, the local end comprises a lower limb prosthesis, a lower limb prosthesis controller, a binocular camera, and an image acquisition and information transmission hardware platform, and the cloud end comprises a cloud server;
the binocular camera is mounted on the lower limb prosthesis and collects terrain information around it; the image acquisition and information transmission hardware platform gathers the information obtained by the binocular camera and uploads it to the cloud server; and the lower limb prosthesis controller controls the motion mode of the lower limb prosthesis;
a target terrain detection module, an instance segmentation module, a feature matching module, and a position and size calculation module are deployed on the cloud server, and terrain information is processed by these modules in sequence and then fed back to the lower limb prosthesis controller.
Preferably, the lower limb prosthesis comprises a knee joint having one degree of freedom and an ankle joint having two active degrees of freedom, flexion/extension and eversion/inversion.
Preferably, the binocular camera is mounted at the front of the knee joint and acquires RGB images of the terrain around the lower limb prosthesis, configured with a resolution of 1280 × 720 and a sampling rate of 30 frames per second.
Preferably, the image acquisition and information transmission hardware platform is located in a cavity inside the lower limb prosthesis structure; it receives the RGB images acquired by the binocular camera in real time, transmits them to the cloud server through a wireless network, receives the terrain information fed back by the cloud server in real time, and outputs it to the lower limb prosthesis motion controller.
Preferably, the cloud server is provided with a CPU and a GPU; the CPU supports multithreading and serves multiple users simultaneously, and the GPU accelerates RGB image processing.
Preferably, control of the lower limb prosthesis motion by the lower limb prosthesis controller comprises the following steps:
step S1: acquiring IMU-based motion data of a healthy person's lower limbs in different terrain environments and storing the data in the lower limb prosthesis controller;
step S2: the lower limb prosthesis controller receives the current terrain data of the lower limb prosthesis sent by the cloud server;
step S3: the lower limb prosthesis controller controls the lower limb prosthesis to adopt a motion state suited to the terrain, based on the stored human lower limb motion data;
step S4: for terrain or obstacles still some distance away, the lower limb prosthesis controller predicts whether they can be traversed and the estimated arrival time from the current movement direction and speed, to achieve a smoother state switch in software.
Preferably, the target terrain detection module preliminarily identifies different terrain in the RGB image using a trained and optimized YOLOv4-Lite convolutional neural network with a MobileNet backbone, marks each terrain region with a bounding box (Box), and outputs the results to the instance segmentation module.
Preferably, the instance segmentation module extracts an initial contour from each Box obtained by target detection using the Extreme Points method, then uses a trained Deep Snake convolutional neural network based on circular convolution to deform the initial contour into a terrain contour that accurately fits each terrain, and outputs the contour to the feature matching module.
Preferably, the feature matching module uses the Speeded-Up Robust Features (SURF) method to extract and match features for pixels within same-type terrain contours captured at the same moment by the binocular camera and processed by the instance segmentation module, computes feature point depth from the disparity of matched feature points combined with the binocular camera parameters, and then computes their three-dimensional coordinates.
Preferably, the position and size calculation module calculates the position, length, width, and range of identified terrain from the coordinates of feature points on the contour edges, and calculates the length, width, height, and position of identified steps, obstacles, and the like from the coordinates of the contour corner points.
Compared with the prior art, the invention has the following beneficial effects:
1. the real-time instance segmentation-based lower limb prosthesis terrain identification and motion prediction system realizes intelligent optimization and promotion of the lower limb prosthesis in multiple motion modes, enables the lower limb prosthesis to accurately sense the surrounding terrain distribution situation in real time, corrects and adjusts the motion control strategy in time, and promotes the motion capability and wearing comfort of the lower limb prosthesis wearer.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic structural diagram of a real-time example segmentation-based lower limb prosthesis terrain identification and motion prediction system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the depthwise separable convolution in a real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system according to an embodiment of the present application;
FIG. 3 is a block diagram of the YOLOv4 network in a real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system according to an embodiment of the present application;
FIG. 4 is a diagram of the circular-convolution-based Deep Snake network in a real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the binocular vision positioning principle in a real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system according to an embodiment of the present application.
Detailed Description
The present invention will be described in detail with reference to specific embodiments. The following examples will assist those skilled in the art in further understanding the invention, but do not limit it in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
The present invention provides a real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system. Referring to fig. 1, it comprises a local end and a cloud end: the local end includes the lower limb prosthesis, the lower limb prosthesis controller, the binocular camera, and the image acquisition and information transmission hardware platform; the cloud end includes the cloud server.
The lower limb prosthesis supports multiple motion modes. The shank and foot segments are 3D-printed from ABS resin, and the knee and ankle joints are made of 1060 aluminum alloy. Both joints are driven by Maxon motors; the knee joint has one degree of freedom, and the ankle has two active degrees of freedom, flexion/extension and eversion/inversion.
The binocular camera is mounted at the front of the knee joint on the lower limb prosthesis and fixed by an electric pan-tilt mount; it collects RGB images of the terrain around the prosthesis, configured with a resolution of 1280 × 720 and a sampling rate of 30 frames per second. The image acquisition and information transmission hardware platform sits in a cavity inside the prosthesis structure; it receives the RGB images from the binocular camera in real time, transmits them to the cloud server over a wireless network, receives the terrain information fed back by the cloud server in real time, and outputs it to the lower limb prosthesis motion controller, which controls the motion mode of the prosthesis.
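As a minimal sketch of this acquisition-and-upload loop (the endpoint URL, JPEG transport, camera index, and controller hand-off are illustrative assumptions, not details given in this disclosure):

```python
import cv2
import requests

CLOUD_ENDPOINT = "http://cloud-server.example/terrain"  # hypothetical endpoint

def forward_to_controller(terrain_info: dict) -> None:
    """Hypothetical hand-off to the lower limb prosthesis motion controller."""
    print(terrain_info)

def acquisition_loop() -> None:
    cap = cv2.VideoCapture(0)  # stereo camera exposed as one device (assumed)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
    cap.set(cv2.CAP_PROP_FPS, 30)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # JPEG-compress each frame to keep the wireless uplink lightweight.
        _, buf = cv2.imencode(".jpg", frame)
        resp = requests.post(CLOUD_ENDPOINT, data=buf.tobytes(),
                             headers={"Content-Type": "image/jpeg"}, timeout=1.0)
        forward_to_controller(resp.json())  # terrain info fed back by the cloud
```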
The method by which the lower limb prosthesis controller controls the motion of the lower limb prosthesis comprises the following steps:
step S1: acquiring IMU-based motion data of healthy people's lower limbs while walking and running on sand, grass, and asphalt, ascending and descending ramps and steps, and crossing obstacles of different widths and heights, and storing the data in the lower limb prosthesis controller;
step S2: the lower limb prosthesis controller receives the current terrain data of the lower limb prosthesis sent by the cloud server;
step S3: the lower limb prosthesis controller controls the lower limb prosthesis to adopt a motion state suited to the terrain, based on the stored human lower limb motion data;
step S4: for terrain or obstacles still some distance away, the lower limb prosthesis controller predicts whether they can be traversed and the estimated arrival time from the current movement direction and speed, and prepares the motion strategy switch in advance, achieving a smoother state switch in software and improving wearing comfort (see the sketch after these steps).
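A minimal sketch of the step S4 prediction, assuming the cloud feedback supplies the horizontal distance and height of the upcoming terrain and the controller tracks the current walking speed; the data fields and the height threshold are illustrative assumptions, not values from this disclosure:

```python
from dataclasses import dataclass

@dataclass
class TerrainAhead:
    kind: str          # e.g. "step", "obstacle", "sand"
    distance_m: float  # horizontal distance along the walking direction
    height_m: float

MAX_CROSSABLE_HEIGHT_M = 0.25  # illustrative limit, not from this disclosure

def predict_transition(terrain: TerrainAhead, speed_mps: float):
    """Decide whether the terrain ahead can be traversed and when it will be
    reached, so the motion strategy switch can be pre-armed in software."""
    crossable = terrain.height_m <= MAX_CROSSABLE_HEIGHT_M
    eta_s = terrain.distance_m / speed_mps if speed_mps > 0 else float("inf")
    return crossable, eta_s

# Example: a 0.15 m step 1.2 m ahead at 0.8 m/s -> crossable, reached in 1.5 s
print(predict_transition(TerrainAhead("step", 1.2, 0.15), 0.8))
```

Pre-arming the switch a fixed lead time before the predicted arrival, rather than reacting at contact, is what makes the state change feel smooth to the wearer.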
A target terrain detection module, an instance segmentation module, a feature matching module, and a position and size calculation module are deployed on the cloud server; terrain information is processed by these modules in sequence and fed back to the image acquisition and information transmission hardware platform, which sends a control instruction to the lower limb prosthesis controller.
The cloud server is provided with CPUs and GPUs: the CPUs support multithreading and serve multiple users simultaneously, while the GPUs accelerate RGB image processing. To cope with growing user numbers and possible failures, the system is highly scalable and has a redundancy backup mechanism.
The target terrain detection module performs fine-grained image detection. A detection framework based on a deep convolutional neural network, with good real-time performance and high accuracy, is designed and built; the network determines the category and accurate contour of the target object. A detection dataset of target terrain is collected and manually annotated, then split into training, validation, and test sets with data preprocessing. A deep convolutional neural network of suitable structure and a suitable loss function are designed, and the network is trained on the dataset; hyperparameters and the network structure are tuned continuously according to training results, aiming at a network with good real-time performance and high recognition accuracy. The invention uses an improved YOLOv4-Lite based on MobileNet to recognize grass, sand, asphalt, and steps. MobileNet's key feature is the depthwise separable convolution: a per-channel depthwise convolution followed by a pointwise convolution greatly reduces the parameter count while preserving the feature extraction quality, as shown in fig. 2.
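As an illustrative sketch of the depthwise separable convolution of fig. 2, built from standard PyTorch layers (the channel sizes are arbitrary examples):

```python
import torch.nn as nn

def depthwise_separable(c_in: int, c_out: int, stride: int = 1) -> nn.Sequential:
    """MobileNet-style block: a per-channel 3x3 depthwise convolution followed
    by a 1x1 pointwise convolution that mixes channels."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, 3, stride, padding=1, groups=c_in, bias=False),
        nn.BatchNorm2d(c_in), nn.ReLU6(inplace=True),
        nn.Conv2d(c_in, c_out, 1, bias=False),
        nn.BatchNorm2d(c_out), nn.ReLU6(inplace=True),
    )

# Weight count for 128 -> 256 channels with a 3x3 kernel:
#   standard convolution:  3*3*128*256 = 294,912
#   depthwise separable:   3*3*128 + 128*256 = 33,920  (~8.7x fewer)
block = depthwise_separable(128, 256)
```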
Referring to fig. 3, the original YOLOv4 network takes a 416 × 416 RGB image as input and extracts features through the CSPDarknet-53 backbone network. Three feature layers of different sizes are used to achieve multi-scale feature fusion; the network regresses the offset of the target position relative to a default rectangular box (default box) and outputs a classification confidence. The original YOLOv4 structure is complex and slow at prediction, so the network structure must be adjusted to meet the system's detection speed requirement. Trading accuracy against speed, the MobileNet network is used as the backbone to extract features, while the input resolution is unchanged and the same number of feature layers is used for target detection.
The instance segmentation module must further separate the background within the series of rectangular boxes and terrain category labels produced by the target terrain detection module, obtaining an accurate contour of every target terrain with low computational complexity and good real-time performance. As shown in fig. 4, the Deep Snake algorithm represents an object's shape by its outermost contour, with a parameter count that is very small, close to that of a rectangular box representation. First, the midpoints of the four sides of a recognized terrain's rectangular box are connected to form a diamond; the four diamond vertices are fed into the Deep Snake network, which predicts offsets pointing to the four extreme pixels of the target terrain (top, bottom, left, and right in the image), and the diamond is deformed by these offsets to obtain the coordinates of the four extreme points. Second, a line segment is extended at each extreme point and the segments are connected, yielding an irregular octagon that serves as the initial contour. Then, since an object contour can be regarded as a cycle graph in which every node has exactly two neighbours in a fixed order, the contour can be iteratively optimized by convolution: a deep network based on circular convolution fuses the features of the current contour points, maps them to offsets pointing toward the object contour, and deforms the contour point coordinates by these offsets. Finally, starting from the initial contour, this process is repeated until the computed contour fits the actual edge of the target object well and the predicted offsets fall below a threshold; the output consists of the terrain category label and the coordinate set of the contour points.
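The circular convolution at the heart of Deep Snake can be sketched as an ordinary 1D convolution whose padding wraps around, matching the cycle-graph view of the contour; the feature dimension, vertex count, and kernel size below are illustrative:

```python
import torch
import torch.nn as nn

class CircularConvBlock(nn.Module):
    """One circular-convolution layer over contour vertices: every vertex has
    exactly two neighbours in a fixed order, so the per-vertex feature sequence
    is periodic and can be convolved with wrap-around padding."""
    def __init__(self, feat_dim: int, kernel_size: int = 9):
        super().__init__()
        self.conv = nn.Conv1d(feat_dim, feat_dim, kernel_size,
                              padding=kernel_size // 2, padding_mode="circular")
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feat_dim, n_vertices) -- features sampled at contour points
        return self.relu(self.conv(x))

# 128-D features at 40 contour vertices; a regression head would then map the
# fused features to 2-D offsets that deform each vertex toward the true edge.
feats = torch.randn(1, 128, 40)
out = CircularConvBlock(128)(feats)  # shape preserved: (1, 128, 40)
```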
In addition, terrain belongs to the background category, and occlusion by foreground objects is very common. To address this, a grouped detection method can be adopted: the discontinuous parts of a terrain region cut off by foreground objects are assigned to different target detection rectangular boxes, and a contour is then extracted from each box separately.
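A sketch of this grouped detection idea, assuming a binary mask of one terrain class whose continuity is broken by foreground occluders (the minimum-area filter is an illustrative assumption):

```python
import cv2
import numpy as np

def split_occluded_terrain(mask: np.ndarray, min_area: int = 500):
    """Split a terrain mask that foreground objects cut into pieces: one
    detection box per connected component, so each piece gets its own
    contour-extraction pass."""
    n, _, stats, _ = cv2.connectedComponentsWithStats(
        mask.astype(np.uint8), connectivity=8)
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:  # drop specks left by segmentation noise
            boxes.append((x, y, w, h))
    return boxes
```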
The feature matching module and the position and size calculation module extract and match Speeded-Up Robust Features (SURF) for pixels within same-type terrain contours captured at the same moment by the binocular camera and processed by the instance segmentation module. From the disparity of matched feature points, combined with the binocular camera parameters, the depth of each feature point is calculated, and from it the feature point's three-dimensional coordinates; the binocular vision positioning principle is shown in fig. 5. For identified grass, sand, asphalt, and the like, the position, length, width, and range can be calculated from the coordinates of feature points on the contour edges; for identified steps, obstacles, and the like, the length, width, height, and position can be calculated from the coordinates of the contour corner points.
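A minimal sketch of this matching-and-triangulation step for a rectified stereo pair; the calibration constants are placeholders, and SURF requires the non-free opencv-contrib build (assumptions about tooling and values, not details given in this disclosure):

```python
import cv2
import numpy as np

FOCAL_PX = 700.0        # focal length in pixels (assumed)
BASELINE_M = 0.06       # stereo baseline in metres (assumed)
CX, CY = 640.0, 360.0   # principal point of a 1280x720 image (assumed)

def triangulate_terrain_points(left, right, contour_mask):
    """Match SURF features inside one terrain contour (a mask on the left
    image) and recover 3D coordinates from the stereo disparity."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_l, des_l = surf.detectAndCompute(left, contour_mask)
    kp_r, des_r = surf.detectAndCompute(right, None)
    if des_l is None or des_r is None:
        return np.empty((0, 3))
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_l, des_r)

    points = []
    for m in matches:
        ul, vl = kp_l[m.queryIdx].pt
        ur, _ = kp_r[m.trainIdx].pt
        d = ul - ur                    # disparity of the matched feature point
        if d <= 1e-6:
            continue                   # at or beyond infinity; skip
        z = FOCAL_PX * BASELINE_M / d  # depth by similar triangles
        x = (ul - CX) * z / FOCAL_PX
        y = (vl - CY) * z / FOCAL_PX
        points.append((x, y, z))
    return np.array(points)
```

The terrain extents used by the position and size calculation module then follow from the minimum and maximum of the recovered coordinates along each axis.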
Those skilled in the art will appreciate that, in addition to being implemented as pure computer-readable program code, the system and its various devices, modules, and units provided by the present invention can be implemented entirely by logically programming the method steps into logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, the system and its various devices, modules, and units can be regarded as a hardware component, and the devices, modules, and units included in it for realizing various functions can be regarded as structures within that hardware component; means, modules, and units for performing the various functions may also be regarded as both software modules implementing the method and structures within the hardware component.
In the description of the present application, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present application and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present application.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.
Claims (10)
1. A real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system, characterized in that: the local end comprises a lower limb prosthesis, a lower limb prosthesis controller, a binocular camera, and an image acquisition and information transmission hardware platform, and the cloud end comprises a cloud server;
the binocular camera is mounted on the lower limb prosthesis and collects terrain information around it; the image acquisition and information transmission hardware platform gathers the information obtained by the binocular camera and uploads it to the cloud server; the hardware platform is electrically connected with the lower limb prosthesis controller, which controls the motion mode of the lower limb prosthesis;
a target terrain detection module, an instance segmentation module, a feature matching module, and a position and size calculation module are deployed on the cloud server, and terrain information is processed by these modules in sequence and then fed back to the image acquisition and information transmission hardware platform.
2. The real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system according to claim 1, wherein: the lower limb prosthesis comprises a knee joint having one degree of freedom and an ankle joint having two active degrees of freedom, flexion/extension and eversion/inversion.
3. The real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system according to claim 1, wherein: the binocular camera is mounted at the front of the knee joint and acquires RGB images of the terrain around the lower limb prosthesis, configured with a resolution of 1280 × 720 and a sampling rate of 30 frames per second.
4. The real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system according to claim 1, wherein: the image acquisition and information transmission hardware platform is located in a cavity inside the lower limb prosthesis structure; it receives the RGB images acquired by the binocular camera in real time, transmits them to the cloud server through a wireless network, receives the terrain information fed back by the cloud server in real time, and outputs it to the lower limb prosthesis motion controller.
5. The real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system according to claim 1, wherein: the cloud server is provided with a CPU and a GPU; the CPU supports multithreading and serves multiple users simultaneously, and the GPU accelerates RGB image processing.
6. The real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system according to claim 1, wherein control of the lower limb prosthesis by the lower limb prosthesis controller comprises the following steps:
step S1: acquiring IMU-based motion data of a healthy person's lower limbs in different terrain environments and storing the data in the lower limb prosthesis controller;
step S2: the lower limb prosthesis controller receives the current terrain data of the lower limb prosthesis sent by the cloud server;
step S3: the lower limb prosthesis controller controls the lower limb prosthesis to adopt a motion state suited to the terrain, based on the stored human lower limb motion data;
step S4: for terrain or obstacles still some distance away, the lower limb prosthesis controller predicts whether they can be traversed and the estimated arrival time from the current movement direction and speed, to achieve a smoother state switch in software.
7. The real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system according to claim 1, wherein: the target terrain detection module preliminarily identifies different terrain in the RGB image using a trained and optimized YOLOv4-Lite convolutional neural network with a MobileNet backbone, marks each terrain region with a bounding box (Box), and outputs the results to the instance segmentation module.
8. The real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system according to claim 7, wherein: the instance segmentation module extracts an initial contour from each Box obtained by target detection using the Extreme Points method, then uses a trained Deep Snake convolutional neural network based on circular convolution to deform the initial contour into a terrain contour that accurately fits each terrain, and outputs the contour to the feature matching module.
9. The real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system according to claim 8, wherein: the feature matching module uses the Speeded-Up Robust Features (SURF) method to extract and match features for pixels within same-type terrain contours captured at the same moment by the binocular camera and processed by the instance segmentation module, computes feature point depth from the disparity of matched feature points combined with the binocular camera parameters, and then computes their three-dimensional coordinates.
10. The real-time instance segmentation-based lower limb prosthesis terrain recognition and motion prediction system according to claim 9, wherein: the position and size calculation module calculates the position, length, width, and range of identified terrain from the coordinates of feature points on the contour edges, and calculates the length, width, height, and position of identified steps, obstacles, and the like from the coordinates of the contour corner points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110780493.5A CN113499173B (en) | 2021-07-09 | 2021-07-09 | Real-time instance segmentation-based terrain identification and motion prediction system for lower artificial limb |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110780493.5A CN113499173B (en) | 2021-07-09 | 2021-07-09 | Real-time instance segmentation-based terrain identification and motion prediction system for lower artificial limb |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113499173A (en) | 2021-10-15 |
CN113499173B (en) | 2022-10-28 |
Family
ID=78012619
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110780493.5A Active CN113499173B (en) | 2021-07-09 | 2021-07-09 | Real-time instance segmentation-based terrain identification and motion prediction system for lower artificial limb |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113499173B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114145890A (en) * | 2021-12-02 | 2022-03-08 | 中国科学技术大学 | Prosthetic device with terrain recognition function |
CN116030536A (en) * | 2023-03-27 | 2023-04-28 | 国家康复辅具研究中心 | Data collection and evaluation system for use state of upper limb prosthesis |
CN116869713A (en) * | 2023-07-31 | 2023-10-13 | 南方科技大学 | Visual-assistance-based artificial limb control method, device, artificial limb and storage medium |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070050047A1 (en) * | 2005-09-01 | 2007-03-01 | Ragnarsdottlr Heidrun G | System and method for determining terrain transitions |
US20090087029A1 (en) * | 2007-08-22 | 2009-04-02 | American Gnc Corporation | 4D GIS based virtual reality for moving target prediction |
CN104408718A (en) * | 2014-11-24 | 2015-03-11 | 中国科学院自动化研究所 | Gait data processing method based on binocular vision measuring |
CN205031391U (en) * | 2015-10-13 | 2016-02-17 | 河北工业大学 | Road condition recognition device of power type artificial limb |
CN209422174U (en) * | 2018-08-02 | 2019-09-24 | 南方科技大学 | Dynamic artificial limb environment recognition system integrating vision |
CN109446911A (en) * | 2018-09-28 | 2019-03-08 | 北京陌上花科技有限公司 | Image detecting method and system |
CN109766864A (en) * | 2019-01-21 | 2019-05-17 | 开易(北京)科技有限公司 | Image detecting method, image detection device and computer readable storage medium |
CN111247557A (en) * | 2019-04-23 | 2020-06-05 | 深圳市大疆创新科技有限公司 | Method and system for detecting moving target object and movable platform |
CN110901788A (en) * | 2019-11-27 | 2020-03-24 | 佛山科学技术学院 | Biped mobile robot system with literacy ability |
CN110974497A (en) * | 2019-12-30 | 2020-04-10 | 南方科技大学 | Electric artificial limb control system and control method |
CN111174781A (en) * | 2019-12-31 | 2020-05-19 | 同济大学 | Inertial navigation positioning method based on wearable device combined target detection |
CN111658246A (en) * | 2020-05-19 | 2020-09-15 | 中国科学院计算技术研究所 | Intelligent joint prosthesis regulating and controlling method and system based on symmetry |
Non-Patent Citations (2)
Title |
---|
LIN Chuan et al.: "Contour Detection Algorithms Based on Deep Learning: A Review", Journal of Guangxi University of Science and Technology *
XUE Zewen: "Research on Walkable-Area Perception Methods for Lower-Limb Rehabilitation Exoskeletons in Daily Living Environments", China Master's Theses Full-text Database *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114145890A (en) * | 2021-12-02 | 2022-03-08 | 中国科学技术大学 | Prosthetic device with terrain recognition function |
CN114145890B (en) * | 2021-12-02 | 2023-03-10 | 中国科学技术大学 | Prosthetic device with terrain recognition function |
CN116030536A (en) * | 2023-03-27 | 2023-04-28 | 国家康复辅具研究中心 | Data collection and evaluation system for use state of upper limb prosthesis |
CN116869713A (en) * | 2023-07-31 | 2023-10-13 | 南方科技大学 | Visual-assistance-based artificial limb control method, device, artificial limb and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113499173B (en) | 2022-10-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113499173B (en) | Real-time instance segmentation-based terrain identification and motion prediction system for lower artificial limb | |
US10417775B2 (en) | Method for implementing human skeleton tracking system based on depth data | |
Krausz et al. | Depth sensing for improved control of lower limb prostheses | |
CN109344694B (en) | Human body basic action real-time identification method based on three-dimensional human body skeleton | |
Hu et al. | 3D Pose tracking of walker users' lower limb with a structured-light camera on a moving platform | |
CN106022213A (en) | Human body motion recognition method based on three-dimensional bone information | |
CN112025729B (en) | Multifunctional intelligent medical service robot system based on ROS | |
Wang et al. | Quantitative assessment of dual gait analysis based on inertial sensors with body sensor network | |
CN114067358A (en) | Human body posture recognition method and system based on key point detection technology | |
CN107742097B (en) | Human behavior recognition method based on depth camera | |
Varol et al. | A feasibility study of depth image based intent recognition for lower limb prostheses | |
CN117671738B (en) | Human body posture recognition system based on artificial intelligence | |
CN109919137B (en) | Pedestrian structural feature expression method | |
CN114099234B (en) | Intelligent rehabilitation robot data processing method and system for assisting rehabilitation training | |
Hu et al. | A novel method for bilateral gait segmentation using a single thigh-mounted depth sensor and IMU | |
CN103942829A (en) | Single-image human body three-dimensional posture reconstruction method | |
CN102156994B (en) | Joint positioning method for single-view unmarked human motion tracking | |
CN116421372A (en) | Method for controlling a prosthesis, prosthesis and computer-readable storage medium | |
CN110742712A (en) | Artificial limb movement intention identification method and device based on source end fusion | |
CN118522117A (en) | Safety path generation and falling judgment device for assisting user to intelligent closestool | |
Al-dabbagh et al. | Using depth vision for terrain detection during active locomotion | |
CN116206358A (en) | Lower limb exoskeleton movement mode prediction method and system based on VIO system | |
CN114494655A (en) | Blind guiding method and device for assisting user to intelligent closestool based on artificial intelligence | |
Miao et al. | Stereo-based Terrain Parameters Estimation for Lower Limb Exoskeleton | |
Kong et al. | A review of the application of staircase scene recognition system in assisted motion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||