CN112587285B - Multi-mode information guide environment perception myoelectric artificial limb system and environment perception method - Google Patents

Multi-mode information guide environment perception myoelectric artificial limb system and environment perception method

Info

Publication number
CN112587285B
CN112587285B (application CN202011434904.7A)
Authority
CN
China
Prior art keywords: information, user, artificial limb, hand, artificial
Prior art date
Legal status
Active
Application number
CN202011434904.7A
Other languages
Chinese (zh)
Other versions
CN112587285A (en)
Inventor
宋爱国
胡旭晖
祝佳航
李会军
曾洪
徐宝国
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202011434904.7A priority Critical patent/CN112587285B/en
Publication of CN112587285A publication Critical patent/CN112587285A/en
Application granted granted Critical
Publication of CN112587285B publication Critical patent/CN112587285B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F — Filters implantable into blood vessels; prostheses; devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents; orthopaedic, nursing or contraceptive devices; fomentation; treatment or protection of eyes or ears; bandages, dressings or absorbent pads; first-aid kits
    • A61F 2/00 — Filters implantable into blood vessels; prostheses, i.e. artificial substitutes or replacements for parts of the body; appliances for connecting them with the body; devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
    • A61F 2/50 — Prostheses not implantable in the body
    • A61F 2/54 — Artificial arms or hands or parts thereof
    • A61F 2/58 — Elbows; wrists; other joints; hands
    • A61F 2/583 — Hands; wrist joints
    • A61F 2/68 — Operating or control means
    • A61F 2/70 — Operating or control means electrical
    • A61F 2/72 — Bioelectric control, e.g. myoelectric
    • A61F 2002/6827 — Feedback system for providing user sensation, e.g. by force, contact or position

Landscapes

  • Health & Medical Sciences (AREA)
  • Transplantation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Vascular Medicine (AREA)
  • Cardiology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Prostheses (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a multi-modal information-guided environment-aware myoelectric prosthesis system and an environment perception method. The prosthesis system comprises a prosthetic hand with multi-modal sensing capability, an array myoelectric sensing module, a wearable camera module, a wearable force-tactile feedback device, a voice interaction module and a data processor. The prosthetic hand is worn on the user's stump through a prosthetic socket and senses force-tactile and temperature information; the array myoelectric sensing module is integrated on the inner surface of the socket where it contacts the stump; the wearable camera module provides the prosthetic hand with proximity sensing of the object to be grasped; the wearable force-tactile feedback device lets the user perceive the prosthesis's proximity, temperature and force-tactile information; the voice interaction module handles interaction between the user and the myoelectric prosthesis system; and the data processor is integrated in the socket. The invention enables a blind amputee to autonomously approach and grasp objects.

Description

Multi-mode information-guided environment perception myoelectric artificial limb system and environment perception method
Technical Field
The invention relates to a multi-modal information-guided environment-aware myoelectric artificial limb system and an environment perception method, and belongs to the technical field of myoelectric prosthesis systems.
Background
Vision and touch are essential senses through which people learn about, adapt to and ultimately modify their environment. For people with limb or visual disabilities, the lack of one or both of these senses is a huge obstacle to regaining the ability to care for themselves. For forearm amputees, basic motor function can be reconstructed by wearing a commercial prosthesis, but existing prosthetic hands still lack force-tactile perception, since most users can judge the interaction state between the prosthetic hand and the environment through their own vision. However, when the prosthetic hand is in the dark or the line of sight is blocked, the user cannot obtain force-tactile information about the interaction between the prosthetic hand and the environment.
Blind people depend even more on the force-tactile perception of their hands. Existing assistive devices for the blind, such as machine-vision glasses for guiding, banknote recognition and reading, or devices that render images as stimulation on the tongue, still rely on the user's intact force-tactile perception. There is therefore a gap in rehabilitation devices for blind amputees, which the present invention aims to fill.
Disclosure of Invention
The invention aims to provide a multi-mode information-guided environment-sensing myoelectric artificial limb system and an environment sensing method.
The above purpose is realized by the following technical scheme:
A multi-modal information-guided environment-aware myoelectric artificial limb system comprises a prosthetic hand with multi-modal sensing capability, an array myoelectric sensing module, a wearable camera module, a wearable force-tactile feedback device, a voice interaction module and a data processor. The prosthetic hand with multi-modal sensing capability senses force-tactile and temperature information and is worn on the user's stump through a prosthetic socket (receiving cavity). The array myoelectric sensing module is integrated on the inner surface of the socket where it contacts the stump. The wearable camera module provides the prosthetic hand with proximity sensing of the object to be grasped. The wearable force-tactile feedback device lets the user perceive the prosthesis's proximity, temperature and force-tactile information. The voice interaction module handles interaction between the user and the myoelectric prosthesis system. The data processor is integrated in the socket and is used for decoding the user's myoelectric intention, controlling the prosthetic hand, handling voice interaction with the user, and executing the key programs of the system.
In the above multi-modal information-guided environment-aware myoelectric artificial limb system, the prosthetic hand with multi-modal sensing capability comprises an electronic skin glove; the palm side of the glove carries an infrared temperature sensor and a laser ranging sensor, and the glove is provided with a signal acquisition board to which the data lines of the infrared temperature sensor and the laser ranging sensor are connected.
The wearable camera module comprises an RGBD sensor used for transmitting a color image and point cloud data based on a binocular depth image in real time.
The wearable force-tactile feedback device comprises one or two vibration unit arrays for letting the wearer sense vibration; each vibration unit array consists of four vibration motors arranged in a ring around the upper arm, evenly distributed on its front, rear, outer and inner sides. The wearable force-tactile feedback device further comprises a cross sliding table device capable of two-dimensional translation, used to let the wearer sense a two-dimensional push-pull force.
The method for multi-modal information-guided environment perception using the above myoelectric artificial limb system comprises the following steps:
S1, the wearable camera module first captures a picture of the actual scene; the data processor then recognizes the objects in the picture and transmits the recognition results to the voice interaction module, which feeds them back to the user by voice or sound effects;
S2, according to the received information about the objects in the environment, the user issues a specific grasping task through the voice interaction module;
S3, wearing the prosthetic hand with multi-modal sensing capability, the user performs the object-approaching and grasping tasks step by step. In the first step, object approaching, the wearable camera module captures the spatial coordinates of the object in real time; the data processor converts the coordinate information into front-back, left-right and up-down proximity information that the user can understand, and the voice interaction module or the wearable force-tactile feedback device feeds this information back to the user, realizing the prosthetic hand's proximity sensing of the object to be grasped. After the approach is completed, the user controls the prosthetic hand to grasp the object through the array myoelectric sensing module integrated in the prosthesis;
S4, the voice interaction module and the wearable force-tactile feedback device prompt the results of the approaching and grasping tasks; if either task is unsuccessful, the method returns to step S3.
The method for multi-modal information-guided environment perception further comprises, during the approaching and grasping processes of steps S3 and S4, optimizing the perceptual coding parameters of the force-tactile feedback device by means of reinforcement learning.
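By way of illustration only, the following is a minimal sketch of how such a coding parameter (here a single gain mapping hand-object distance to vibration frequency) could be tuned online from trial outcomes. The reward (shorter approach time), the perturb-and-keep update and the simulated run_trial stand-in are assumptions for illustration; the invention only states that reinforcement learning is used to optimize the feedback-coding parameters.

```python
import random

def run_trial(gain, optimal=25.0):
    # Stand-in for a real user trial: approach time (s) grows as the coding
    # gain moves away from the (unknown) value this particular user perceives
    # best. In the real system this would be the measured approach duration.
    return 5.0 + 0.2 * (gain - optimal) ** 2 + random.gauss(0.0, 0.2)

def optimize_gain(gain=20.0, step=2.0, trials=30):
    """Simple perturb-and-keep (hill-climbing) update over the coding gain."""
    best_time = run_trial(gain)
    for _ in range(trials):
        candidate = gain + random.uniform(-step, step)   # explore nearby gains
        t = run_trial(candidate)
        if t < best_time:                                # faster approach: keep it
            gain, best_time = candidate, t
    return gain

print(optimize_gain())   # converges toward the user's preferred gain
```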
In the method for multi-modal information-guided environment perception, the specific process of step S1 is as follows:
S11, capturing a picture with the RGBD sensor, extracting the pixel coordinates and contour of each object in the picture with an online or offline general object recognition method, and identifying the objects;
S12, broadcasting the approximate direction and the identity of each object to the user by voice, in short phrases and at double speed.
In the method for multi-modal information-guided environment perception, the specific process by which the wearable camera module in step S3 realizes proximity sensing of the object to be grasped by the prosthetic hand is as follows:
S311, the positions of the object to be grasped and of the prosthetic hand are tracked in real time in the color image transmitted by the RGBD sensor; the color image is then aligned with the point cloud data, and the data processor determines the three-dimensional coordinates of the prosthetic hand and of the object to be grasped in the point cloud according to the pixel coordinates identified in the color image;
S312, the data processor calculates a space vector whose starting point is the three-dimensional coordinate of the prosthetic hand and whose end point is the three-dimensional coordinate of the object to be grasped; the vector is converted from the spatial coordinate system referenced to the wearable camera module into a spatial coordinate system referenced to the ground, and the components of the converted vector along the three dimensions become the front-back, left-right and up-down proximity information that the user can understand, representing the direction and distance of the object from the prosthetic hand.
In the method for multi-modal information-guided environment perception, the specific way in which the wearable force-tactile feedback device in step S3 converts the object proximity and orientation information into feedback the user can understand is as follows:
S321, the data processor represents the direction from the prosthesis to the object by the movement of the slider of the cross sliding table in its four directions, while the distance between the prosthesis and the object is represented by the moving speed of the slider; at the same time, the orientation information is also represented by the vibration of the unit arrays distributed in the four directions, with the vibration frequency reflecting the distance between the prosthesis and the object;
S322, when the prosthetic hand completes the approaching task and obtains temperature and force-tactile information, the wearable force-tactile feedback device can use specific vibration patterns to indicate that the approach is complete, that the temperature is too high, or that the object has been grasped, or use specific push-pull force patterns to indicate the direction, distance and speed of the object to be grasped; the voice interaction module likewise issues specific sound effects for approach completed, temperature too high and object grasped, or uses three-dimensional surround sound to indicate the direction, distance and moving speed of the object;
S323, at the same time, the orientation information of the prosthesis and the object is used to generate a spatial audio effect that guides the user. Specifically, a virtual scene containing the prosthetic hand and the object is built in a computer in real time, reproducing the orientation and distance relationship between them; in this scene a sound source is placed in the virtual object and a stereo pickup is placed in the virtual prosthesis, the simulated stereo sound generated by the virtual object and received by the virtual pickup is played to the user, and the user's ability to localize spatial audio lets them determine the direction and distance of the real object relative to the prosthesis.
In the method for multi-modal information-guided environment perception, the specific way in which the prosthetic hand determines in step S4 whether the user has completed the approaching and grasping tasks is as follows:
the laser ranging sensor on the palm side of the prosthetic hand reads data in real time; when the reading falls below a certain threshold, the object is very close to the palm and has entered the grasping space of the prosthetic hand, indicating that the approaching task has succeeded. Once the approach has succeeded, the infrared temperature sensor begins to measure the temperature of the object, and if the temperature exceeds a set threshold the user is warned of the high temperature by voice or a sound effect. The electronic skin detects the contact between the object and the prosthetic hand and the applied force in real time; when the user controls the prosthetic hand to complete the grasping action, the electronic skin analyses the distribution of contact force between the prosthetic hand and the object, and when the total pressure exceeds a certain threshold the object-grasping task is deemed successful.
Drawings
FIG. 1 is a schematic view of a blind assist prosthesis system of the present invention;
FIG. 2 is a schematic diagram of a prosthetic hand with force haptic and proximity perception;
FIG. 3 is a diagram of the processing of color images and point cloud data from the RGBD sensor of the wearable camera module;
FIG. 4 is a map of the alignment of sensor data with depth data for its coordinate system transformation;
FIG. 5 is a schematic view of a wearable force haptic feedback device;
FIG. 6 is a schematic view of a cross slide apparatus;
FIG. 7 is a block diagram of a blind assist prosthesis system of the present invention;
FIG. 8 is a system flow diagram of the present invention.
Detailed Description
As shown in fig. 1, the multi-modal information-guided environment-aware myoelectric prosthetic system includes a prosthetic hand 1 with multi-modal sensing capability, an array myoelectric sensing module 2, a wearable camera module 3, a wearable force-tactile feedback device 4, a voice interaction module 5, and a data processor 6. The multi-modal prosthetic hand has skin-like sensing capabilities such as force-tactile sensing and proximity sensing. The prosthetic hand 1 is worn on the user's stump through a prosthetic socket; the array myoelectric sensing module 2 is integrated on the inner surface of the socket where it contacts the stump, and the data processor 6 is integrated in the socket. The wearable camera module 3 realizes proximity sensing of the object to be grasped by the prosthetic hand. The wearable force-tactile feedback device 4 lets the user perceive the prosthesis's proximity, temperature and force-tactile information. The voice interaction module 5 handles interaction between the user and the myoelectric prosthesis system, such as understanding instructions given by the user, feeding back proximity direction, grasping state and other information, and guiding the user's interaction with the environment. The data processor is used for decoding the user's myoelectric intention, controlling the prosthetic hand, handling voice interaction, and executing the key programs of the system. The system helps blind amputees perceive the environment and guides them to autonomously grasp objects in the environment according to their needs.
As shown in fig. 2, the prosthetic hand with multi-modal sensing capability includes an electronic skin glove. The palm side of the glove carries an infrared temperature sensor 8 and a laser ranging sensor 9, and the glove is provided with a signal acquisition board 10 to which the data lines of the infrared temperature sensor and the laser ranging sensor are connected. A flexible force-tactile sensing array 7 is embedded in the fingertips of the glove. The electronic skin is made with a fully flexible material process and comprises: A, a temperature detection module consisting of a single optical fiber with a light-emitting tube and a receiving tube at its two ends, plus a reference optical fiber with its own light-emitting tube and receiving tube; B, a slip detection module consisting of a piezoelectric film printed with upper and lower electrodes and a single optical fiber; and C, a capacitive pressure detection module consisting of upper and lower layers of conductive cloth with a flexible conductive dielectric layer between them. The specific sensing principle of the electronic skin glove is described in detail in the applicant's patent document CN111912462A and is not repeated here. The sensor data lines on the glove are connected to a control board located on the robotic prosthetic hand.
As shown in fig. 5, the wearable force-tactile feedback device of the present invention comprises one or two vibration unit arrays 16, evenly distributed on the front, rear, outer and inner sides of the arm, which let the wearer sense vibration; it also comprises a cross sliding table 17 capable of two-dimensional translation, worn on the abdomen so that the wearer can sense a two-dimensional push-pull force. By encoding the temperature and force-tactile information sensed by the prosthetic hand and the proximity information sensed by the camera module into vibration or push-pull force, the device provides the user with the guidance needed when approaching and grasping an object. The slider on the cross sliding table moves in a two-dimensional plane and presses against the body, so the user can feel push or pull in the four directions of the plane. The two-dimensional sliding table is not limited to a Delta structure, a Stewart structure or a tandem two-axis sliding table; it may also be built as shown in fig. 6, with two fixed slide rails arranged on the left and right, a sliding guide rail in the middle that moves vertically, and a square slider in the middle that moves horizontally. The axisymmetric pulleys on the two fixed slide rails are a pair of driving pulleys, and the traction wire routed around each pulley is rigidly connected to the square slider so that no relative movement is possible. The structure is differentially controlled by the two driving pulleys: rotating them in the same direction moves the slider horizontally, rotating them in opposite directions moves it vertically, and rotating them at different speeds moves it obliquely.
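By way of illustration only, the differential mapping described above can be sketched as follows, assuming a CoreXY-style belt routing in which equal pulley speeds move the slider horizontally and opposite speeds move it vertically; the gain and units are illustrative assumptions, not specified by the invention.

```python
def pulley_speeds(vx, vy, k=1.0):
    """Map a desired slider velocity (vx horizontal, vy vertical) to the two
    driving-pulley speeds. Equal speeds -> horizontal motion, opposite speeds
    -> vertical motion, unequal magnitudes -> oblique motion, matching the
    differential behaviour described for the cross sliding table."""
    w1 = k * (vx + vy)
    w2 = k * (vx - vy)
    return w1, w2

# Pure horizontal cue: both pulleys turn the same way.
print(pulley_speeds(0.02, 0.0))    # -> (0.02, 0.02)
# Pure vertical cue: the pulleys turn in opposite directions.
print(pulley_speeds(0.0, 0.02))    # -> (0.02, -0.02)
```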
The wearable camera module comprises an RGBD sensor used for transmitting a color image and point cloud data based on a binocular depth image in real time.
The module block diagram of the multi-modal information-guided environment-aware myoelectric prosthetic system is shown in fig. 7. The system rebuilds the sensory channels the user is missing and re-establishes the interaction between the user and the external environment. The user and the voice interaction module interact bidirectionally: the user sends instructions or requests to the system, and the system responds with the execution status of those instructions or the results of its interaction with the environment. By understanding the user's intention, the voice interaction module selectively controls the multi-modal sensing module of the prosthetic hand and the wearable camera module, and obtains from them the information generated while interacting with the environment, such as the grasping information sensed at the prosthetic fingertips and the identity and position of objects in the environment. This information is fed back to the user by voice on the one hand, and, after multi-modal information encoding, conveyed to the user through force-tactile feedback on the other. Once the user has received the feedback, i.e. knows the current interaction state with the environment, the array myoelectric sensing module recognizes the movement intention of the residual-limb muscles, and the user can complete autonomous approaching, grasping and similar actions with the prosthetic hand. During approaching and grasping, the multi-modal sensing module on the prosthetic hand and the wearable camera module continue to feed the environment-interaction information back to the wearer in real time, so that the wearer can interact with the environment reliably and in real time.
The system flow chart of the invention is shown in fig. 8. First, the user issues an environment search task by voice, such as 'what is on the desktop at the moment?'. The system recognizes the objects in the environment through the camera module and informs the user by voice feedback, such as 'there is a cup, an apple, a banana and a toothbrush on the table'. The user then issues a further instruction based on the recognition result reported by the voice interaction module, such as 'I want to take the cup', after which the system begins to guide the user toward the object to be grasped. The method comprises the following steps:
S1, the wearable camera module first captures a picture of the actual scene; the data processor then recognizes the objects in the picture and transmits the recognition results to the voice interaction module, which feeds them back to the user by voice or sound effects;
S2, according to the received information about the objects in the environment, the user issues a specific grasping task through the voice interaction module. As shown in fig. 3, the objects in the scene are recognized from the color images transmitted by the RGBD sensor, and the recognition result is then reported to the user through the voice interaction module. Based on the recognition result announced by voice, the user specifies the object to be grasped.
S3, wearing the prosthetic hand with multi-modal sensing capability, the user performs the object-approaching and grasping tasks step by step. In the first step, object approaching, the wearable camera module captures the spatial coordinates of the object in real time; the data processor converts the coordinate information into front-back, left-right and up-down proximity information that the user can understand, and the voice interaction module or the wearable force-tactile feedback device feeds this information back to the user, realizing the prosthetic hand's proximity sensing of the object to be grasped. After the approach is completed, the user controls the prosthetic hand to grasp the object through the array myoelectric sensing module integrated in the prosthesis. As shown at 14 in fig. 4, the positions of the object to be grasped and of the prosthetic hand are tracked in real time in the color image transmitted by the RGBD sensor. The color image is then aligned with the point cloud data, and the three-dimensional coordinates of the prosthetic hand and of the object to be grasped are determined in the point cloud according to the pixel coordinates identified in the color image. A space vector is calculated whose starting point is the three-dimensional coordinate of the prosthetic hand and whose end point is the three-dimensional coordinate of the object to be grasped. Then, as shown at 15 in fig. 4, this vector is converted from the spatial coordinate system originally referenced to the wearable camera module into a spatial coordinate system referenced to the ground; the components of the converted vector along the three dimensions become the front-back, left-right and up-down proximity information understood by the user, indicating the direction and distance of the object from the prosthetic hand.
S4, the voice interaction module and the wearable force-tactile feedback device prompt the results of the approaching and grasping tasks; if either task is unsuccessful, the method returns to step S3.
In the method for multi-modal information-guided environment perception, the specific process of step S1 is as follows:
S11, capturing a picture with the RGBD sensor, extracting the pixel coordinates and contour of each object in the picture with an online or offline general object recognition method, and identifying the objects;
S12, broadcasting the approximate direction and the identity of each object to the user by voice, in short phrases and at double speed.
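By way of illustration only, a minimal sketch of steps S11-S12 is given below. The detect_objects stand-in, the left/centre/right binning and the phrase format are illustrative assumptions; any online or offline general object recognizer and text-to-speech engine could fill these roles.

```python
def detect_objects(color_image):
    """Stand-in for the general object recognizer: returns a list of
    (label, (u, v)) pixel centroids for the objects found in the frame."""
    return [("cup", (520, 260)), ("apple", (180, 300))]   # illustrative output

def rough_direction(u, width=640):
    # Coarse left/centre/right direction from the pixel column of the object.
    if u < width / 3:
        return "left"
    if u > 2 * width / 3:
        return "right"
    return "centre"

def announce(detections, say):
    # Short phrases such as "cup, right; apple, left", spoken at double speed.
    phrase = "; ".join(f"{label}, {rough_direction(u)}" for label, (u, _v) in detections)
    say(phrase, rate=2.0)

announce(detect_objects(None), say=lambda text, rate: print(f"[TTS x{rate}] {text}"))
```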
In the method for multi-modal information-guided environment perception, the specific process by which the wearable camera module in step S3 realizes proximity sensing of the object to be grasped by the prosthetic hand is as follows:
S311, the positions of the object to be grasped and of the prosthetic hand are tracked in real time in the color image transmitted by the RGBD sensor; the color image is then aligned with the point cloud data, and the data processor determines the three-dimensional coordinates of the prosthetic hand and of the object to be grasped in the point cloud according to the pixel coordinates identified in the color image;
S312, the data processor calculates a space vector whose starting point is the three-dimensional coordinate of the prosthetic hand and whose end point is the three-dimensional coordinate of the object to be grasped; the vector is converted from the spatial coordinate system referenced to the wearable camera module into a spatial coordinate system referenced to the ground, and the components of the converted vector along the three dimensions become the front-back, left-right and up-down proximity information that the user can understand, representing the direction and distance of the object from the prosthetic hand.
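By way of illustration only, steps S311-S312 can be sketched as follows, assuming the point cloud has been aligned to the color image as an H x W x 3 array indexed by pixel, and that the camera-to-ground rotation is available from an orientation estimate of the camera module; all names and values are illustrative assumptions.

```python
import numpy as np

def point_at(cloud, pixel):
    """Look up the 3-D point (metres, camera frame) behind a colour-image pixel,
    assuming the point cloud is already aligned with the colour image."""
    u, v = pixel
    return cloud[v, u]                       # H x W x 3 array indexed by (row, col)

def proximity_vector(cloud, hand_px, object_px, R_cam_to_ground):
    """Vector from the prosthetic hand to the target, expressed in a ground-
    referenced frame so its components read as front-back, left-right, up-down."""
    v_cam = point_at(cloud, object_px) - point_at(cloud, hand_px)
    v_ground = R_cam_to_ground @ v_cam       # rotate out of the camera frame
    return v_ground, float(np.linalg.norm(v_ground))

# Illustrative call: an identity rotation stands in for the real camera attitude.
cloud = np.zeros((480, 640, 3))
cloud[240, 400] = [0.3, 0.0, 0.6]            # fake 3-D point for the object pixel
vec, dist = proximity_vector(cloud, hand_px=(320, 240), object_px=(400, 240),
                             R_cam_to_ground=np.eye(3))
print(vec, dist)                             # direction and distance to the object
```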
In the method for multi-modal information-guided environment perception, the specific way in which the wearable force-tactile feedback device in step S3 converts the object proximity and orientation information into feedback the user can understand is as follows:
S321, the data processor represents the direction from the prosthesis to the object by the movement of the slider of the cross sliding table in its four directions, while the distance between the prosthesis and the object is represented by the moving speed of the slider; at the same time, the orientation information is also represented by the vibration of the unit arrays distributed in the four directions, with the vibration frequency reflecting the distance between the prosthesis and the object;
S322, when the prosthetic hand completes the approaching task and obtains temperature and force-tactile information, the wearable force-tactile feedback device can use specific vibration patterns to indicate that the approach is complete, that the temperature is too high, or that the object has been grasped, or use specific push-pull force patterns to indicate the direction, distance and speed of the object to be grasped; the voice interaction module likewise issues specific sound effects for approach completed, temperature too high and object grasped, or uses three-dimensional surround sound to indicate the direction, distance and moving speed of the object;
S323, at the same time, the orientation information of the prosthesis and the object is used to generate a spatial audio effect that guides the user. Specifically, a virtual scene containing the prosthetic hand and the object is built in a computer in real time, reproducing the orientation and distance relationship between them; in this scene a sound source is placed in the virtual object and a stereo pickup is placed in the virtual prosthesis, the simulated stereo sound generated by the virtual object and received by the virtual pickup is played to the user, and the user's ability to localize spatial audio lets them determine the direction and distance of the real object relative to the prosthesis.
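By way of illustration only, a minimal sketch of the encodings in S321 and S323 follows. The linear mappings, the assumption that vibration frequency rises as the hand gets closer, and the simple left-right panning model are illustrative assumptions; the invention does not fix these particular functions.

```python
import numpy as np

def encode_feedback(v_ground, k_slide=0.05, f_min=5.0, f_max=60.0, d_ref=1.0):
    """Encode the hand-to-object vector as (a) a 2-D slider velocity whose
    direction points toward the target and whose speed reflects the remaining
    distance, and (b) a vibration frequency that reflects the same distance
    (here assumed to rise as the hand gets closer)."""
    dist = float(np.linalg.norm(v_ground))
    slider_v = k_slide * np.array(v_ground[:2])          # front-back, left-right cue
    freq = float(np.clip(f_max * (1.0 - dist / d_ref), f_min, f_max))
    return slider_v, freq

def stereo_gains(v_ground):
    """Very rough spatial-audio cue: pan a virtual sound source between the ears
    according to the left-right component, louder when the object is closer."""
    dist = float(np.linalg.norm(v_ground)) + 1e-6
    pan = float(np.clip(v_ground[1] / dist, -1.0, 1.0))  # -1 full left, +1 full right
    loud = float(np.clip(1.0 / dist, 0.0, 1.0))
    return loud * (1.0 - pan) / 2.0, loud * (1.0 + pan) / 2.0   # (left, right)

v = np.array([0.3, -0.2, 0.1])               # object 0.3 m ahead, 0.2 m to the left
print(encode_feedback(v), stereo_gains(v))
```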
In the method for multi-modal information-guided environment perception, the specific way in which the prosthetic hand determines in step S4 whether the user has completed the approaching and grasping tasks is as follows:
the laser ranging sensor on the palm side of the prosthetic hand reads data in real time; when the reading falls below a certain threshold, the object is very close to the palm and has entered the grasping space of the prosthetic hand, indicating that the approaching task has succeeded. Once the approach has succeeded, the infrared temperature sensor begins to measure the temperature of the object, and if the temperature exceeds a set threshold the user is warned of the high temperature by voice or a sound effect. The electronic skin detects the contact between the object and the prosthetic hand and the applied force in real time; when the user controls the prosthesis to complete the grasping action, the electronic skin analyses the distribution of contact force between the prosthesis and the object, and when the total pressure exceeds a certain threshold the object-grasping task is deemed successful.
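By way of illustration only, the threshold logic above can be sketched as follows; the threshold values, sensor-reading arguments and prompt strings are illustrative assumptions, since the invention only specifies that certain thresholds are used.

```python
# Illustrative thresholds; the actual values would be tuned for the hardware.
APPROACH_DIST_M = 0.05     # laser range below this: object inside the grasp space
HOT_TEMP_C      = 50.0     # infrared reading above this: high-temperature warning
GRASP_FORCE_N   = 2.0      # total e-skin contact force above this: grasp succeeded

def check_state(laser_range_m, object_temp_c, contact_forces_n, announce):
    """Evaluate approach success, temperature warning and grasp success from the
    palm laser ranger, the infrared sensor and the e-skin force array."""
    if laser_range_m >= APPROACH_DIST_M:
        return "approaching"
    announce("approach complete")
    if object_temp_c > HOT_TEMP_C:
        announce("caution: hot object")
    if sum(contact_forces_n) > GRASP_FORCE_N:
        announce("object grasped")
        return "grasped"
    return "approached"

# Example call with made-up sensor readings:
print(check_state(0.03, 24.0, [0.8, 0.7, 0.9], announce=print))
```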
As noted above, while the present invention has been shown and described with reference to certain preferred embodiments, it is not to be construed as limited thereto. Various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (5)

1. A method for multi-modal information-guided environment perception using a multi-modal information-guided environment-aware myoelectric artificial limb system, the system comprising a prosthetic hand with multi-modal sensing capability, an array myoelectric sensing module, a wearable camera module, a wearable force-tactile feedback device, a voice interaction module and a data processor; the prosthetic hand with multi-modal sensing capability is used for sensing force-tactile and temperature information and is worn on the user's stump through a prosthetic socket (receiving cavity); the array myoelectric sensing module is integrated on the inner surface of the socket where it contacts the stump; the wearable camera module realizes proximity sensing of the object to be grasped by the prosthetic hand; the wearable force-tactile feedback device lets the user perceive the prosthesis's proximity, temperature and force-tactile information; the voice interaction module is used for interaction between the user and the myoelectric artificial limb system; the data processor is integrated in the socket and is used for decoding the user's myoelectric intention, controlling the prosthetic hand, handling voice interaction and executing the key programs of the system; and the wearable camera module comprises an RGBD sensor for transmitting in real time a color image and point cloud data based on a binocular depth image;
the method is characterized by comprising the following steps:
S1, the wearable camera module first captures a picture of the actual scene; the data processor then recognizes the objects in the picture and transmits the recognition results to the voice interaction module, which feeds them back to the user by voice or sound effects;
S2, according to the received information about the objects in the environment, the user issues a specific grasping task through the voice interaction module;
S3, wearing the prosthetic hand with multi-modal sensing capability, the user performs the object-approaching and grasping tasks step by step; in the first step, object approaching, the wearable camera module captures the spatial coordinates of the object in real time, the data processor converts the coordinate information into front-back, left-right and up-down proximity information that the user can understand, and the voice interaction module and the wearable force-tactile feedback device feed this information back to the user, realizing the prosthetic hand's proximity sensing of the object to be grasped; after the approach is completed, the user controls the prosthetic hand to grasp the object through the array myoelectric sensing module integrated in the prosthesis;
S4, the voice interaction module and the wearable force-tactile feedback device prompt the results of the approaching and grasping tasks, and if either task is unsuccessful the method returns to step S3;
the specific process by which the wearable camera module in step S3 realizes proximity sensing of the object to be grasped by the prosthetic hand is as follows:
S311, the positions of the object to be grasped and of the prosthetic hand are tracked in real time in the color image transmitted by the RGBD sensor; the color image is then aligned with the point cloud data, and the data processor determines the three-dimensional coordinates of the prosthetic hand and of the object to be grasped in the point cloud according to the pixel coordinates identified in the color image;
S312, the data processor calculates a space vector whose starting point is the three-dimensional coordinate of the prosthetic hand and whose end point is the three-dimensional coordinate of the object to be grasped; the vector is converted from the spatial coordinate system referenced to the wearable camera module into a spatial coordinate system referenced to the ground, and the components of the converted vector along the three dimensions become the front-back, left-right and up-down proximity information that the user can understand, representing the direction and distance of the object from the prosthetic hand;
the wearable force-tactile feedback device comprises one or two vibration unit arrays for letting the wearer sense vibration, each vibration unit array consisting of four vibration motors arranged in a ring around the upper arm and evenly distributed on its front, rear, outer and inner sides; the wearable force-tactile feedback device further comprises a cross sliding table device capable of two-dimensional translation, used to let the wearer sense a two-dimensional push-pull force;
the specific way in which the wearable force-tactile feedback device in step S3 converts the object proximity and orientation information into feedback the user can understand is as follows:
S321, the data processor represents the direction from the prosthesis to the object by the movement of the slider of the cross sliding table in its four directions, while the distance between the prosthesis and the object is represented by the moving speed of the slider; at the same time, the orientation information is also represented by the vibration of the unit arrays distributed in the four directions, with the vibration frequency reflecting the distance between the prosthesis and the object;
S322, when the prosthetic hand completes the approaching task and obtains temperature and force-tactile information, the wearable force-tactile feedback device can use specific vibration patterns to indicate that the approach is complete, that the temperature is too high, or that the object has been grasped, or use specific push-pull force patterns to indicate the direction, distance and speed of the object to be grasped; the voice interaction module likewise issues specific sound effects for approach completed, temperature too high and object grasped, or uses three-dimensional surround sound to indicate the direction, distance and moving speed of the object;
S323, at the same time, the orientation information of the prosthesis and the object is used to generate a spatial audio effect that guides the user; specifically, a virtual scene containing the prosthetic hand and the object is built in a computer in real time, reproducing the orientation and distance relationship between them; in this scene a sound source is placed in the virtual object and a stereo pickup is placed in the virtual prosthesis, the simulated stereo sound generated by the virtual object and received by the virtual pickup is played to the user, and the user's ability to localize spatial audio lets them determine the direction and distance of the real object relative to the prosthesis.
2. The method for multi-modal information-guided environment perception according to claim 1, wherein the prosthetic hand with multi-modal sensing capability comprises an electronic skin glove, the palm side of the glove carries an infrared temperature sensor and a laser ranging sensor, and a signal acquisition board is arranged on the glove to which the data lines of the infrared temperature sensor and the laser ranging sensor are connected.
3. The method according to claim 1, wherein the approaching and grasping processes of steps S3 and S4 further comprise optimizing the perceptual coding parameters of the force-tactile feedback device using reinforcement learning.
4. The method for multi-modal information-guided environment perception according to claim 1, wherein the specific process of step S1 is:
S11, capturing a picture with the RGBD sensor, extracting the pixel coordinates and contour of each object in the picture with an online or offline general object recognition method, and identifying the objects;
S12, broadcasting the approximate direction and the identity of each object to the user by voice, in short phrases and at double speed.
5. The method for multi-modal information-guided environment perception according to claim 1, wherein the specific way in which the prosthetic hand determines in step S4 whether the user has completed the approaching and grasping tasks is as follows:
the laser ranging sensor on the palm side of the prosthetic hand reads data in real time; when the reading falls below a certain threshold, the object is very close to the palm and has entered the grasping space of the prosthetic hand, indicating that the approaching task has succeeded; once the approach has succeeded, the infrared temperature sensor begins to measure the temperature of the object, and if the temperature exceeds a set threshold the user is warned of the high temperature by voice or a sound effect; the electronic skin detects the contact between the object and the prosthetic hand and the applied force in real time; when the user controls the prosthetic hand to complete the grasping action, the electronic skin analyses the distribution of contact force between the prosthetic hand and the object, and when the total pressure exceeds a certain threshold the object-grasping task is deemed successful.
CN202011434904.7A 2020-12-10 2020-12-10 Multi-mode information guide environment perception myoelectric artificial limb system and environment perception method Active CN112587285B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011434904.7A CN112587285B (en) 2020-12-10 2020-12-10 Multi-mode information guide environment perception myoelectric artificial limb system and environment perception method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011434904.7A CN112587285B (en) 2020-12-10 2020-12-10 Multi-mode information guide environment perception myoelectric artificial limb system and environment perception method

Publications (2)

Publication Number Publication Date
CN112587285A CN112587285A (en) 2021-04-02
CN112587285B (en) 2023-03-24

Family

ID=75191453

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011434904.7A Active CN112587285B (en) 2020-12-10 2020-12-10 Multi-mode information guide environment perception myoelectric artificial limb system and environment perception method

Country Status (1)

Country Link
CN (1) CN112587285B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113180894B (en) * 2021-04-27 2022-02-11 浙江大学 Visual intelligence-based hand-eye coordination method and device for multiple-obstacle person
KR102582861B1 (en) * 2021-07-15 2023-09-27 중앙대학교 산학협력단 Prosthetic arm capable of automatic object recognition with camera and laser
CN114185426A (en) * 2021-10-28 2022-03-15 安徽工程大学 Fingertip wearable dual-channel interaction device
CN217793747U (en) * 2021-12-22 2022-11-15 简伟龙 Intelligent blind guiding device
CN114681169B (en) * 2022-03-02 2023-04-18 中国科学院深圳先进技术研究院 Myoelectricity control tactile feedback artificial hand
CN117001715A (en) * 2023-08-30 2023-11-07 哈尔滨工业大学 Intelligent auxiliary system and method for visually impaired people

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5888213A (en) * 1997-06-06 1999-03-30 Motion Control, Inc. Method and apparatus for controlling an externally powered prosthesis
US20060189899A1 (en) * 2005-01-10 2006-08-24 Flaherty J Christopher Joint movement apparatus
DE102010006301A1 (en) * 2010-01-30 2011-04-21 Deutsches Zentrum für Luft- und Raumfahrt e.V. Device for reducing phantom pain during amputate, has computing device for controlling model of amputee member based on movement of member, and feedback device for producing feedback based on computed movements of model
CN102871784B (en) * 2012-09-21 2015-04-08 中国科学院深圳先进技术研究院 Positioning controlling apparatus and method
CN103271784B (en) * 2013-06-06 2015-06-10 山东科技大学 Man-machine interactive manipulator control system and method based on binocular vision
US20180061276A1 (en) * 2016-08-31 2018-03-01 Intel Corporation Methods, apparatuses, and systems to recognize and audibilize objects
CN108742957B (en) * 2018-06-22 2021-02-09 上海交通大学 Multi-sensor fusion artificial limb control method
WO2020010328A1 (en) * 2018-07-05 2020-01-09 The Regents Of The University Of Colorado, A Body Corporate Multi-modal fingertip sensor with proximity, contact, and force localization capabilities
CN209422174U (en) * 2018-08-02 2019-09-24 南方科技大学 A kind of powered prosthesis Context awareness system merging vision
CN109172066B (en) * 2018-08-18 2019-12-20 华中科技大学 Intelligent prosthetic hand based on voice control and visual recognition and system and method thereof
CN109846582B (en) * 2019-03-20 2021-05-28 上海交通大学 Electrical stimulation system based on multi-mode perception feedback

Also Published As

Publication number Publication date
CN112587285A (en) 2021-04-02

Similar Documents

Publication Publication Date Title
CN112587285B (en) Multi-mode information guide environment perception myoelectric artificial limb system and environment perception method
Kim et al. Continuous shared control for stabilizing reaching and grasping with brain-machine interfaces
Liu et al. Development of an environment-aware locomotion mode recognition system for powered lower limb prostheses
CN109605385B (en) Rehabilitation assisting robot driven by hybrid brain-computer interface
Schröer et al. An autonomous robotic assistant for drinking
Mahmud et al. Interface for human machine interaction for assistant devices: A review
EP3549725A1 (en) Upper limb motion assisting device and upper limb motion assisting system
KR101343860B1 (en) Robot avatar system using hybrid interface and command server, learning server, and sensory server therefor
CN104524742A (en) Cerebral palsy child rehabilitation training method based on Kinect sensor
Baldi et al. Design of a wearable interface for lightweight robotic arm for people with mobility impairments
CN110507322B (en) Myoelectricity quantitative state evaluation system and method based on virtual induction
Wang et al. Human-centered, ergonomic wearable device with computer vision augmented intelligence for VR multimodal human-smart home object interaction
Maksud et al. Low-cost EEG based electric wheelchair with advanced control features
Martin et al. A novel approach of prosthetic arm control using computer vision, biosignals, and motion capture
CN108646915A (en) The method and system of object is captured in conjunction with three-dimensional eye tracking and brain-computer interface control machinery arm
CN109753153B (en) Haptic interaction device and method for 360-degree suspended light field three-dimensional display system
Hu et al. StereoPilot: A wearable target location system for blind and visually impaired using spatial audio rendering
CN111249112A (en) Hand dysfunction rehabilitation system
CN113876556A (en) Three-dimensional laser scanning massage robot system
Lenhardt et al. An augmented-reality based brain-computer interface for robot control
CN115033101A (en) Fingertip texture touch feedback device based on air bag driving
Law et al. A cap as interface for wheelchair control
Gupta MAC-MAN
Ma et al. Sensing and force-feedback exoskeleton robotic (SAFER) glove mechanism for hand rehabilitation
EP3966664B1 (en) Virtual, augmented and mixed reality systems with physical feedback

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant