CN114131635A - Multi-degree-of-freedom auxiliary external limb grasping robot system integrating visual sense and tactile sense active perception - Google Patents

Multi-degree-of-freedom auxiliary external limb grasping robot system integrating visual sense and tactile sense active perception

Info

Publication number
CN114131635A
CN114131635A (application CN202111492543.6A)
Authority
CN
China
Prior art keywords
manipulator
control
degree
information
grasping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111492543.6A
Other languages
Chinese (zh)
Inventor
李可
胡元栋
李光林
魏娜
田新诚
李贻斌
宋锐
侯莹
何文晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202111492543.6A priority Critical patent/CN114131635A/en
Publication of CN114131635A publication Critical patent/CN114131635A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/081Touching devices, e.g. pressure-sensitive
    • B25J13/082Grasping-force detectors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/085Force or torque sensors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J15/00Gripping heads and other end effectors
    • B25J15/08Gripping heads and other end effectors having finger members
    • B25J15/10Gripping heads and other end effectors having finger members with three or more finger members
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems

Abstract

The invention provides a multi-degree-of-freedom auxiliary grasping outer limb robot system integrating visual and tactile active perception, which comprises an electroencephalogram signal acquisition module for acquiring the user's electroencephalogram signals; a visual information acquisition module for acquiring visual information; a multi-degree-of-freedom auxiliary grasping outer limb robot provided with a multi-degree-of-freedom manipulator, sensors being arranged on the five fingers of the manipulator to detect the corresponding tactile information; and a control system configured to extract the position of the object to be grasped and the candidate positions from the visual information, process and analyze the electroencephalogram signals to acquire the movement intention, control the manipulator to execute a grasping action according to the movement intention so as to move the target object to the target position, receive the tactile information fed back by the manipulator during grasping, and control the grasping force of the manipulator according to the difference between the tactile information and a preset threshold. The invention effectively fuses the movement intention, machine vision and machine touch to establish active perception of the grasped object and the environment, thereby realizing grasping control of the outer limb.

Description

Multi-degree-of-freedom auxiliary external limb grasping robot system integrating visual sense and tactile sense active perception
Technical Field
The invention belongs to the technical field of auxiliary robots, and particularly relates to a multi-degree-of-freedom auxiliary external limb grasping robot system integrating visual sense and tactile sense active sensing.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Robots that assist disabled people with the reaching and grasping movements required in daily activities have been widely applied in recent years. Input devices commonly used to control such robots, such as joysticks, keyboards or touch pads, are not suitable for people with physical disabilities. The brain-computer interface is a novel human-computer interaction mode that has emerged in recent years. By recording and interpreting brain signals through technologies such as electroencephalography, direct control of the robot can be achieved without relying on traditional input devices, and the movement intention is recognized through real-time, accurate decoding and classification of the electroencephalogram signals. When the robot faces practical manipulation tasks, because it must handle objects of different shapes in complex and changing environments, real-time decoding and accurate classification of the electroencephalogram remain key technical problems that require continued attention and breakthroughs.
To make robots better assist human life, it is also important to give them active perception capability. The core of active perception is that, rather than passively receiving environmental information, the robot collects and integrates it using multi-modal sensors such as vision and touch, thereby forming active cognition, active understanding and active adaptation to the environment. Visual information acquisition uses a camera and a computer in place of the human eye to identify, track and measure targets, and then performs image processing to provide the computer with usable digital information. This digital information includes not only two-dimensional pictures but also three-dimensional scenes, video sequences, and the like. Through this digital information the robot can fully understand the environment and the manipulated object, and prior experience can conveniently be invoked to control the robot. In addition to visual information, the robot also needs tactile perception capability when it touches an object.
Humans are able to perform tasks efficiently largely because of the mechanoreceptors densely distributed in the human hand. These receptors transmit contact sensing information such as pressure and vibration, generated when the hand contacts an object, to the central nervous system through sensory nerve pathways. By processing and analyzing this tactile information, the central nervous system establishes cognition of key information such as the contact position, the magnitude and direction of the contact force, and the shape, weight, center of mass, surface texture and friction coefficient of the manipulated object. This information allows a human to manipulate and recognize objects with great skill by the sense of touch alone. How to endow robots with the same tactile mechanism is a hot topic in current robotics research; its core is to accurately identify, in real time, the contact information between the fingers and the object when the robot's executing mechanism (such as a manipulator) grips objects of different sizes and shapes, and to establish contact cognition through comprehensive analysis and judgment, so that motion can be controlled through higher-level decisions.
The vision and touch of the robot both play an important role in the process of actively perceiving the environment. However, how to effectively fuse the perception information of these two different modalities, and how to realize motion decision-making and control of the robot by analyzing and fusing the visual and tactile information in combination with the subjective will of the disabled user, remain key problems that restrict the use of such robots in daily-life assistance for the disabled.
Disclosure of Invention
The invention aims to solve the problems and provides a multi-degree-of-freedom auxiliary grasping outer limb robot system integrating visual sense and tactile sense active sensing.
According to some embodiments, the invention adopts the following technical scheme:
a multi-degree-of-freedom auxiliary grasping outer limb robot system fusing visual sense and tactile active perception comprises:
the electroencephalogram signal acquisition module is used for acquiring electroencephalogram signals of a user;
the visual information acquisition module is used for acquiring visual information;
the multi-degree-of-freedom auxiliary grasping outer limb robot, which is provided with a multi-degree-of-freedom manipulator, wherein sensors are respectively arranged on the five fingers of the manipulator for detecting the corresponding tactile information;
the control system is configured to extract the position of an object to be grasped and the position to be selected from the visual information, process and analyze the electroencephalogram signals, acquire a movement intention, control the mechanical arm to execute a grasping action according to the movement intention so as to move the target object to the target position, receive tactile information fed back by the mechanical arm in the grasping execution process, and control the grasping force of the mechanical arm according to the difference value between the tactile information and a preset threshold value.
As an alternative embodiment, the process of processing and analyzing the brain electrical signals by the control system comprises the following steps: after filtering the electroencephalogram signals, extracting the characteristics of the set frequency band, classifying the extracted characteristics, determining the movement intention, and combining preset manipulator control instructions according to the movement intention.
As an alternative embodiment, the movement intent includes a target object, a target location, and an action instruction.
As an alternative embodiment, the visual information acquisition module comprises an imaging device arranged in front of the user for detecting the object to be gripped and the environmental information.
As an alternative embodiment, the specific process of the control system extracting the position of the object to be gripped and the position to be selected from the visual information includes: identifying an image acquired by an imaging device, determining an approximate area of each position to be selected, and extracting coordinates of each position to be selected; extracting key points from point cloud data of an area where an object to be grasped is located, calculating a three-dimensional rapid point feature histogram of the key points, describing the relative direction of a normal between the two points, comparing the histogram with a histogram of a target to be identified possibly of a known model to obtain a point-to-point corresponding relation, and determining the position of the object to be grasped.
Further, when the control system extracts the position of the object to be gripped and the position to be selected from the visual information, the position of the manipulator needs to be determined, and the position of the manipulator is determined by a positioning module on the manipulator;
or extracting key points from the point cloud data of the area where the visual information manipulator is located, calculating a three-dimensional fast point feature histogram of the key points, describing the relative direction of the normal between the two points, comparing the histogram with the histogram of the target to be identified of the known model to obtain the point-to-point corresponding relation, and determining the position of the manipulator.
In an alternative embodiment, the control system takes visual information and movement intention as priority control input before the manipulator performs the gripping action, the manipulator selects a gripping control mode according to a classification result after decoding the movement intention, calculates the distance and the pose of the manipulator relative to the target object by combining the visual information, and determines the movement path of the manipulator according to the distance and the pose.
In an alternative embodiment, when the manipulator performs the gripping action, the control system takes the tactile information fed back by the manipulator as a priority control input, and completes the gripping control of the object by combining with the inherent motion mode preset by the manipulator.
As an alternative embodiment, in the process that the manipulator performs the gripping action and moves the target object, the control system uses the movement intention and the visual information acquired in real time as a feedforward control basis, and uses the manipulator touch information as a feedback control source, so as to realize the movement control of the target object.
In an alternative embodiment, the sensors are force sensors, and the control system compares the detection value of each force sensor with its threshold value, and when the difference exceeds a set range, adjusts the motion of the corresponding manipulator finger to increase or decrease the gripping force until the difference between the detection value and its threshold value is within the set range.
Compared with the prior art, the invention has the beneficial effects that:
the invention effectively fuses machine vision and machine touch to establish active perception of the grasped object and environment, thereby realizing grasping control of the external limb. The invention can provide important assistance for personnel with inconvenient limb activities, provides important technical support for assisting the required personnel to complete daily life, and has wide application value.
In the implementation process, the control logics of the visual information, the tactile information and the user movement intention are organically coordinated according to different process links, different control modes are adopted according to different grasping stages, high-efficiency operation can be realized, and the working efficiency and the accuracy of the whole system are ensured.
According to the invention, a pressure sensor is attached to each fingertip at the end of the outer limb manipulator, a suitable threshold is set for each sensor, the measured values during grasping are monitored in real time, and the finger positions are adjusted until the pressure sensor measurements settle near the set thresholds, ensuring that the grasp becomes more anthropomorphic during the gripping process.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and together with the description serve to explain the invention, not to limit it.
FIG. 1 is a schematic diagram of a system operating environment;
FIG. 2 is a schematic diagram of the overall control strategy of the system;
FIGS. 3(a) and (b) are decoding diagrams of the motion intentions;
FIG. 4 is a schematic view of tactile detection;
FIG. 5 is a block diagram of a control strategy for the outer limb;
fig. 6 shows the overall flow chart.
Wherein, 1 is a 7-DOF outer limb mechanical arm with a manipulator mounted at its end; 2 is a force sensor arranged at the end of the manipulator for real-time tactile detection; 3 is an electroencephalogram acquisition device for acquiring the user's electroencephalogram signals; 4 is an imaging device for acquiring object and environment information in real time; 5, 6 and 7 are the objects to be gripped, namely a handbag, a mobile phone and a water cup, respectively.
The specific implementation mode is as follows:
the invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
A brain-control multi-degree-of-freedom auxiliary grasping system integrating visual sense and tactile sense active perception mainly comprises the following parts:
the method includes the steps of collecting electroencephalogram signals, preprocessing and feature extraction, obtaining Spatial features of different frequency bands by adopting a Filter Bank Common Spatial Pattern (FBCSP) algorithm, and finally classifying the processed signals by adopting Linear Discriminant Analysis (LDA).
The second part is to obtain visual information and pass this information to the control system for use. In order to properly interact with the user and the environment, the robotic system needs to perceive them. Firstly, a camera is used for shooting an experimental scene in real time, and then an object to be operated is identified in real time from the acquired RGB-D data. The homogeneous transformation matrix between the frame fixed on the camera (which may be replaced by an RGB sensor in some embodiments) and the frame fixed on the robot base is preliminarily calibrated, and the detection of the position and the direction of the marker in the scene can be realized by using the database.
The third part is the processing of tactile information, which the outer limb acquires to make its grasp anthropomorphic. To give the outer limb better performance during grasping, a pressure sensor is attached to each fingertip at the end of the outer limb manipulator, a suitable threshold is set for each sensor, the measured values during grasping are monitored in real time, and the finger positions are adjusted until the pressure sensor measurements reach the set thresholds.
The fourth part is that the visual information, the tactile information and the movement intention are combined to cooperatively send commands to control the movement of the outer limb. The method comprises the steps of writing required instructions including object grasping, moving to a desired position, starting and stopping control and the like into an upper computer or a control system in advance, and selecting and combining after decoding of the movement intention is completed, so that the outer limbs are mobilized to realize corresponding movement control. The joint angle limitation is set in the motion process of the outer limbs, so that the continuity in the motion is ensured.
In order to make the grasping process of the outer limb coherent and efficient, the control logics of visual information, tactile information and user intention need to be coordinated well, and different control modes are adopted according to different grasping stages: (1) before an object is grasped, establishing control input taking visual information and user intention as main control input, selecting a specific grasping control mode for grasping the outer limb according to a classification result obtained after decoding the movement intention at the moment, realizing feedforward control based on the movement mode, and simultaneously calculating the distance, the pose, the movement path and the like of the outer limb relative to the grasped object by combining the visual information to realize control based on the vision; (2) when the external limb grasps the object, feedback control input with tactile information as a main information source is adopted, and grasping control of the object is completed by combining an inherent motion mode arranged in the external limb; (3) in the moving process of the outer limb gripping object, the user intention and the visual information acquired in real time are used as a feedforward control basis, and the real-time acquired tactile information is used as a feedback control source, so that the movement control of the gripped object is realized.
As a typical embodiment, as shown in fig. 1, a brain-controlled multi-degree-of-freedom assisted grasping system integrating visual and tactile active sensing includes an outer limb mechanical arm (also referred to as the outer limb or the mechanical arm) with a manipulator at its executing end and a force sensor disposed on each finger;
the electroencephalogram acquisition device is used for acquiring electroencephalogram signals of a user;
and the imaging equipment acquires the object to be gripped and the environmental information in real time.
Firstly, the processing of the electroencephalogram signals and the identification of the movement intention comprise: signal preprocessing, feature extraction, intention identification and control command selection. The first step of the preprocessing is to filter the 32-channel electroencephalogram signals with a 50 Hz notch filter and a 0.5 Hz high-pass filter. For feature extraction, as the first stage of the FBCSP algorithm, four band-pass filters are applied to each channel to obtain the 5-10 Hz, 10-15 Hz, 15-20 Hz and 20-25 Hz sub-bands covering the alpha and beta frequency bands.
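As an illustrative sketch only (not part of the original disclosure), the preprocessing and filter-bank stage described above could look as follows in Python; the sampling rate, filter orders and function names are assumptions.

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

FS = 250  # sampling rate in Hz (assumed; not stated in the patent)

def preprocess(eeg):
    """eeg: array of shape (n_channels, n_samples), e.g. 32 x T."""
    # 50 Hz notch filter to suppress power-line interference
    b_n, a_n = iirnotch(w0=50.0, Q=30.0, fs=FS)
    eeg = filtfilt(b_n, a_n, eeg, axis=1)
    # 0.5 Hz high-pass filter to remove slow drift
    b_hp, a_hp = butter(4, 0.5, btype="highpass", fs=FS)
    return filtfilt(b_hp, a_hp, eeg, axis=1)

def filter_bank(eeg, bands=((5, 10), (10, 15), (15, 20), (20, 25))):
    """First stage of FBCSP: split the signal into the four sub-bands."""
    banded = []
    for lo, hi in bands:
        b, a = butter(4, [lo, hi], btype="bandpass", fs=FS)
        banded.append(filtfilt(b, a, eeg, axis=1))
    return banded  # one (n_channels, n_samples) array per band
```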
Discriminant features associated with each movement intention are extracted from each preprocessed data window for classifier training and testing. The second stage of the FBCSP algorithm is applied to the signal in each filtered band; the spatial filters are designed to enhance the differences between the different classes and modes. For an N x T electroencephalogram signal X, where N is the number of channels and T is the number of samples, a spatial filter matrix W is calculated. The 10 classes to be distinguished in this case are: the desired object is a cup (X_1), a mobile phone (X_2) or a handbag (X_3); the desired position is the mouth (X_4), the ear (X_5) or the hand (X_6); and the function instruction is start (X_7), pause (X_8), refresh (X_9) or stop (X_10). The normalized covariance matrix of each class is

C_i = (X_i X_i^T) / trace(X_i X_i^T), i ∈ [1,10]   (1)

where trace(X_i X_i^T) denotes the trace of the matrix X_i X_i^T, i.e. the sum of the elements on its main diagonal.

The composite spatial covariance matrix is obtained from the sum of these average normalized covariance matrices and can be decomposed as

C = Σ_{i=1}^{10} C_i = U_0 A U_0^T   (2)

where U_0 is the eigenvector matrix and A is the diagonal matrix of eigenvalues.

The whitening transform (3) converts the average normalized covariance matrices into (4):

P = A^{-1/2} U_0^T   (3)

S_i = P C_i P^T   (4)

where each S_i is decomposed as in (5); the S_i share a common eigenvector matrix U, and the sum of their eigenvalue matrices is the identity matrix, as in (6):

S_i = U A_i U^T, i ∈ [1,10]   (5)

Σ_{i=1}^{10} A_i = I   (6)

The projection matrix is obtained as

W = U^T P   (7)

The original signal X is projected through the projection matrix W to obtain the feature matrix

Z = W X   (8)

The resulting signal Z has the same dimensions as X. The feature information is mainly concentrated in the first and last components of the feature matrix, while the middle components carry no significant information and can be neglected, so the first m rows and the last m rows of Z (2m < N) are selected as the features of the original input data. Therefore only the variances of the first m and last m rows of Z, denoted Z_p, are considered for feature extraction.

The variance var(Z_p) of Z_p is computed and expressed with logarithmic normalization as

y_i = log( var(Z_p) / Σ_{p=1}^{2m} var(Z_p) )   (9)

where y_i is the normalized feature of the i-th sample.
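A minimal sketch of the CSP computation in equations (1)-(9) is given below for the two-class case (how the ten classes are paired, e.g. one-versus-rest, is not specified in the text and is left open here); solving the generalized eigenvalue problem is mathematically equivalent to the whitening and joint-diagonalization steps (2)-(7).

```python
import numpy as np
from scipy.linalg import eigh

def avg_normalized_cov(trials):
    """Average normalized covariance over trials; each trial is (n_channels, n_samples)."""
    return np.mean([X @ X.T / np.trace(X @ X.T) for X in trials], axis=0)

def csp_filters(trials_a, trials_b, m=3):
    """Return a (2m, n_channels) spatial filter matrix W for two classes."""
    Ca, Cb = avg_normalized_cov(trials_a), avg_normalized_cov(trials_b)
    # Generalized eigenvalue problem: Ca w = lambda (Ca + Cb) w
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:m], order[-m:]])  # first m and last m components
    return eigvecs[:, picks].T

def csp_features(W, X):
    """Log-normalized variance features of a projected trial, as in (8)-(9)."""
    Z = W @ X
    v = Z.var(axis=1)
    return np.log(v / v.sum())
```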
And classifying the result by using the LDA as a classification method.
LDA projects the data into a lower-dimensional space; after projection, the projected points of each class should be as close together as possible, while the distance between the class centers of different classes should be as large as possible.

Given a data set y = (y_1, y_2, ..., y_i, ..., y_2m), where for class y_i the mean vector is μ_i and the covariance matrix is Σ_i, take any two classes of samples, denoted X_0 and X_1. The projections of the two class centers onto the line are ω^T μ_0 and ω^T μ_1, and the covariances of the two projected classes are ω^T Σ_0 ω and ω^T Σ_1 ω.

To make the projected points of samples of the same class as close as possible, the value ω_1 should be as small as possible:

ω_1 = ω^T Σ_0 ω + ω^T Σ_1 ω   (10)

To make the projected points of samples of different classes as separated as possible, the value ω_2 should be as large as possible:

ω_2 = ||ω^T μ_0 - ω^T μ_1||_2^2   (11)

The quotient J is therefore maximized:

J = ||ω^T μ_0 - ω^T μ_1||_2^2 / (ω^T Σ_0 ω + ω^T Σ_1 ω)   (12)

The within-class divergence matrix is defined as

S_w = Σ_0 + Σ_1   (13)

and the between-class divergence matrix as

S_b = (μ_0 - μ_1)(μ_0 - μ_1)^T   (14)

In this embodiment there are 10 classes, so the within-class divergence matrix is

S_w = Σ_{i=1}^{10} Σ_i   (15)

and the between-class divergence matrix is

S_b = Σ_{i=1}^{10} (μ_i - μ)(μ_i - μ)^T   (16)

where μ is the overall mean vector. J is thus simplified to

J = (ω^T S_b ω) / (ω^T S_w ω)   (17)

Since the goal is to maximize J, let ω^T S_w ω = 1; the Lagrange multiplier method then gives (18), and the value of ω is obtained by further calculation:

S_b ω = λ S_w ω   (18)

After the value of ω is obtained through the above processing, the sample set is mapped with ω to obtain new samples with the best classification effect; these samples satisfy the condition that projections of sample points of the same class are close while projections of different classes are dispersed as much as possible, and each dense region of projected points corresponds to one class. In this way an electroencephalogram classifier capable of distinguishing the ten categories is obtained, realizing multi-class recognition of the user's intentions.
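The generalized eigenproblem (18) is what standard LDA implementations solve; a hedged sketch using scikit-learn (a library choice assumed here, not named in the patent) is:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_intent_classifier(features, labels):
    """features: (n_trials, n_features) CSP features pooled over the four bands;
    labels: integers 0..9 for cup/phone/handbag/mouth/ear/hand/start/pause/refresh/stop."""
    clf = LinearDiscriminantAnalysis(solver="eigen")  # internally solves S_b w = lambda S_w w
    clf.fit(features, labels)
    return clf

def decode_intent(clf, feature_vector):
    """Return the decoded class index for one feature vector."""
    return int(clf.predict(np.asarray(feature_vector).reshape(1, -1))[0])
```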
In this embodiment, the classification results correspond to three objects to be grasped, three candidate positions, and four function instructions. The three objects to be grasped are a water cup, a mobile phone and a handbag; the three candidate positions are the mouth, the ear and the hand; and the four function instructions are refresh, start, pause and stop, as shown in fig. 3(a).
Of course, in other embodiments, the object to be grasped, the alternative/candidate location, and the function instruction may all be changed according to the user requirement, and are not described herein again.
After the movement intention generated by the user is obtained, the preset control instructions of the outer limb are combined according to the classification result. For example, if the user desires to move the cup to the mouth, the instruction to grip the cup and the instruction to move to the mouth are combined, and the system waits for the next operation.
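A hedged sketch of how the decoded classes might be combined into a pending instruction pair; the class-to-label mapping and the instruction strings are illustrative assumptions.

```python
OBJECTS   = {0: "cup", 1: "phone", 2: "handbag"}
POSITIONS = {3: "mouth", 4: "ear", 5: "hand"}
FUNCTIONS = {6: "start", 7: "pause", 8: "refresh", 9: "stop"}

class CommandCombiner:
    """Accumulates decoded intents until an object and a target position are both selected."""
    def __init__(self):
        self.obj = None
        self.pos = None

    def update(self, cls):
        """cls: class index from the EEG classifier; returns a combined instruction or None."""
        if cls in OBJECTS:
            self.obj = OBJECTS[cls]
        elif cls in POSITIONS:
            self.pos = POSITIONS[cls]
        elif FUNCTIONS.get(cls) == "refresh":
            # the patent restarts only the current selection stage;
            # clearing both selections is a simplification made here
            self.obj, self.pos = None, None
        if self.obj is not None and self.pos is not None:
            return ("grip_" + self.obj, "move_to_" + self.pos)
        return None
```

For the example in the text, decoding "cup" and then "mouth" would yield the pair ("grip_cup", "move_to_mouth"), after which the system waits for the next instruction.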
Active perception of visual information. First, the position of the user's face is detected: an image at full high-definition resolution is acquired from the sensor and the positions of the mouth and ears are detected on the two-dimensional plane; the face in the scene is identified with Haar-like features, two regions of the face image containing the mouth and the ears respectively are cropped out, and the Haar feature algorithm is then applied within these two regions to find the coordinates of the mouth and ears. In the second step, full high-definition point cloud data are acquired from the sensor to estimate the distance between the mouth and ears and the sensor; the point cloud is filtered with a voxel grid filter to reduce the number of points that need to be computed, points are then extracted from the selected regions of the image, and the x, y, z coordinates of the center of the mouth and the center of the ears are computed.
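For the 2-D landmark step, a hedged OpenCV sketch is shown below; the mouth cascade file is not part of the core OpenCV distribution, so its name and availability are assumptions (an ear cascade would be handled the same way).

```python
import cv2

face_cascade  = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
mouth_cascade = cv2.CascadeClassifier("haarcascade_mcs_mouth.xml")  # assumed third-party cascade

def detect_mouth(frame_bgr):
    """Return the (x, y) pixel center of the mouth, or None if not found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.1, 5):
        lower = gray[fy + fh // 2: fy + fh, fx: fx + fw]  # search the lower half of the face
        for (mx, my, mw, mh) in mouth_cascade.detectMultiScale(lower, 1.1, 10):
            return (fx + mx + mw // 2, fy + fh // 2 + my + mh // 2)
    return None
```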
Next, the position of the user's hand is detected from the acquired image. First the palm region is detected with a skin-color detection algorithm and the pixels are dilated to avoid cutting off the fingers; after binarization the image is filtered to remove background noise and the largest contour is selected. The largest contour is approximated as a polygon so that the position of its center point can be calculated; the distance between the palm and the sensor is estimated by combining the full high-definition point cloud data, and finally the x, y, z coordinates of the palm center are computed.
Finally, the object is identified using the point cloud database. First, key points are extracted by uniform sampling from the scene point cloud in the region where the object is located; a three-dimensional fast point feature histogram is then computed for the key points, describing the relative orientation of the normals between pairs of points, and the histogram is compared with the histograms of the possible targets to be identified from the previously known models to obtain point-to-point correspondences. These correspondences are then merged to enhance the geometric consistency between them and to align the model with its instances in the scene. The correspondences are processed with a random sample consensus estimator to find the best transformation between them. Finally, hypothesis verification is performed and the geometric information of the object is used to reduce the error.
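The FPFH matching pipeline described above could be sketched with Open3D as follows; the library, the parameter values and the exact call signatures are assumptions (they vary between Open3D versions), so this is an illustration of the idea rather than the patent's implementation.

```python
import open3d as o3d

def locate_object(scene_pcd, model_pcd, voxel=0.005):
    """Estimate the 4x4 pose of a known object model in the scene via FPFH + RANSAC."""
    def prep(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, fpfh

    scene_down, scene_fpfh = prep(scene_pcd)
    model_down, model_fpfh = prep(model_pcd)
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        model_down, scene_down, model_fpfh, scene_fpfh,
        mutual_filter=True,
        max_correspondence_distance=1.5 * voxel,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        ransac_n=3,
        checkers=[o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
        criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation  # pose of the model, i.e. of the object, in the scene
```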
After the processing, the position information of the object and the expected moving position information can be obtained, and the outer limb can start actual movement by combining the combined instruction selected by the movement intention.
Active perception of tactile information. The sensor used has a capacitive structure, as shown in fig. 4: the upper gray layer is made of a conductive material, and the middle black layer is made of an elastic insulating material. When force is applied to the upper layer, the middle layer deforms, so the capacitance of the structure it forms changes and the force value can be obtained; empirically, the capacitance and the force value have an approximately linear relationship within a certain range.
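Since the capacitance-force relation is approximately linear within the working range, a calibration sketch (the interface names are assumptions) reduces to a least-squares line fit:

```python
import numpy as np

def calibrate_linear(capacitance_samples, force_samples):
    """Fit F = a*C + b over the sensor's approximately linear range."""
    a, b = np.polyfit(capacitance_samples, force_samples, deg=1)
    return a, b

def capacitance_to_force(c, a, b):
    """Convert a capacitance reading to an estimated force value."""
    return a * c + b
```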
For the tactile information, the contact forces of the five fingers when a human hand grips each of the three objects are first collected and recorded as

F_i, i ∈ [1,5]

where i ∈ [1,5] indexes the five fingers. These recorded values are then used as the thresholds of the pressure sensors. The force sensors are mounted on the five fingers at the end of the outer limb manipulator, and the sensor data are monitored in real time while an object is grasped. When a sensor first reaches its threshold, the position of the corresponding fingertip motor is finely adjusted: when the measured value is slightly larger than the threshold, the motor rotates outward so that the measurement decreases; when the measured value is slightly smaller than the threshold, the motor rotates inward so that the measurement increases. The measured value then remains near the threshold once the motor settles at a position; this operation is repeated until all sensors satisfy the condition, at which point the detection ends. An image of the force values detected in the actual process is shown in fig. 4.
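A hedged sketch of the fingertip adjustment loop described above; read_force and rotate_motor stand for hardware interfaces that are assumptions, not APIs disclosed in the patent.

```python
def adjust_fingertips(read_force, rotate_motor, thresholds, tol=0.05):
    """Keep each fingertip force near its per-finger threshold.

    read_force(i)      -> measured force on finger i (hypothetical interface)
    rotate_motor(i, d) -> small rotation of finger i's motor; d > 0 rotates inward
                          (force increases), d < 0 rotates outward (force decreases)
    thresholds[i]      -> threshold recorded from human grasps of the same object
    """
    for i in range(5):
        f = read_force(i)
        while abs(f - thresholds[i]) > tol * thresholds[i]:
            if f > thresholds[i]:
                rotate_motor(i, -1)  # slightly above threshold: open slightly
            else:
                rotate_motor(i, +1)  # slightly below threshold: close slightly
            f = read_force(i)
```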
Through the operation, the outer limb can adjust the gripping posture of the outer limb through actively sensing the touch information in the actual movement process, so that the gripping process is more anthropomorphic.
The embodiment establishes the active perception of fusion visual touch, and controls the multi-degree-of-freedom outer limb mechanical arm to carry out auxiliary grasping by combining a brain-computer interface so as to meet the grasping requirement of the disabled in daily life.
In the motion control of the outer limb, joint angle limits are added in addition to the basic motions, to prevent pauses in motion caused by singular points arising during movement. For the 7-degree-of-freedom outer limb robotic arm, given a task control objective, i.e. grasping an object and moving it, let the joint state be q and the system (task) state be σ(q). The relationship between the system velocity and the task-space velocity is

σ̇ = J(q) q̇

where J(q) is the Jacobian matrix and q̇ is the joint velocity vector.
After the corresponding position coordinates are obtained, the expected motion path can be derived by combining them with the preset motion commands, and the task value σ is driven to the desired value σ_d by means of a closed-loop inverse kinematics algorithm:

q̇ = J*(σ̇_d + K e)

where K is a positive-definite matrix of chosen gains, e = σ_d - σ is the task error, and J* = J^T (J J^T)^{-1} is the Moore-Penrose pseudo-inverse of J.
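One update step of the closed-loop inverse kinematics law above can be sketched as follows; the jacobian function and the gain matrix K are assumed to be supplied by the robot model, and joint limits are assumed to be handled elsewhere.

```python
import numpy as np

def clik_step(q, sigma, sigma_d, sigma_d_dot, jacobian, K, dt):
    """One CLIK update: q_dot = J* (sigma_d_dot + K e), then integrate the joints."""
    J = jacobian(q)                        # m x 7 task Jacobian (assumed provided)
    J_pinv = J.T @ np.linalg.inv(J @ J.T)  # Moore-Penrose pseudo-inverse J^T (J J^T)^-1
    e = sigma_d - sigma                    # task error
    q_dot = J_pinv @ (sigma_d_dot + K @ e)
    return q + q_dot * dt                  # next joint configuration
```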
The control framework for the outer limb contains two feedforward control terms and one feedback control term. The two feedforward terms are the object information x_t obtained after decoding the movement intention and the position information x_p in the environment; the feedback term is the contact force information x_f during grasping, as shown in detail in fig. 5. The terms x_t and x_p are added to the control task to compute the corresponding desired task velocity σ̇_d, the resulting joint velocity q̇ is sent to the controller, and x_f is used for feedback control of the joint state of the manipulator at the end of the outer limb. Through their combination, x_t, x_p and x_f finally output the corresponding joint torques τ, so that the outer limb performs the corresponding movement.
Different control terms dominate at different stages of the grasp: (1) before the object is gripped, the feedforward terms x_t and x_p play the main control role, where x_t provides information such as the shape and center of mass of the object and x_p provides the object position information, cooperatively controlling the outer limb to invoke its inherent movement mode and generate motion; (2) while the object is being gripped, the feedback term x_f plays the main control role, providing the contact force information of the object and controlling the manipulator at the end of the outer limb to adjust its posture so as to grasp the object better; (3) during the movement of the gripped object, the feedforward term x_p and the feedback term x_f play the main control roles, where x_p provides the position information of the place where the user expects the object to finally be located and controls the outer limb to move the gripped object to the corresponding position, while x_f provides the contact force information of the object and ensures that the outer limb maintains a continuous and stable grasping posture during the movement. This is shown in fig. 5.
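A hedged sketch of the stage-dependent selection of control terms; the stage names and the dictionary form are assumptions made here for illustration only.

```python
from enum import Enum, auto

class Stage(Enum):
    APPROACH = auto()   # before gripping: x_t and x_p dominate (feedforward)
    GRASP = auto()      # while gripping: x_f dominates (tactile feedback)
    TRANSPORT = auto()  # moving the object: x_p feedforward plus x_f feedback

def active_terms(stage):
    """Return which control inputs drive the outer limb in each stage."""
    if stage is Stage.APPROACH:
        return {"feedforward": ["x_t", "x_p"], "feedback": []}
    if stage is Stage.GRASP:
        return {"feedforward": [], "feedback": ["x_f"]}
    return {"feedforward": ["x_p"], "feedback": ["x_f"]}
```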
The overall process is as follows: the outer limb acquires object and environment information in real time and waits for the user to participate; the user wears the electroencephalogram cap, and after the movement intention is decoded, the corresponding area in the target or position column of the interactive interface is highlighted; if the highlighted area is not the object or the desired position expected by the user, the operation is repeated. The interactive interface is shown in fig. 3(b). After a successful selection, the corresponding object position and desired position are searched for in the visual information acquired in real time. At this point the outer limb has obtained the coordinate information and invokes the combined instruction to complete the action of grasping the object. When the outer limb manipulator just contacts the object, the grasping posture is adjusted in real time by combining the sensed tactile information. After grasping is completed, the movement continues to the desired position. If the user needs the outer limb to stay at the current position during the movement, a pause instruction is invoked after the movement intention is decoded; the outer limb then pauses its motion and waits for the next operation.
In this process, in addition to the three objects (a water cup, a mobile phone and a handbag) and the three positions (the mouth, the ear and the hand), the classification result also corresponds to the four function instructions. The start instruction resumes the motion of the outer limb and is used after a pause instruction; the pause instruction is used when the user needs to pause the movement of the outer limb in some situations; the refresh instruction ends and restarts the current task: if it occurs in the target selection stage, the object to be grasped is reselected, and if it occurs in the position selection stage, the desired position is reselected; the stop instruction makes the outer limb stop the current task and stop moving, and is intended for emergency situations. This is shown in fig. 3(a).
The working flow of the system is shown in fig. 6: the user wears the electroencephalogram cap, the outer limb then grasps according to the user's intention and its active visual and tactile perception, and once the user's desired object has been moved to the desired position, one cycle of the flow is finished.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, it is not intended to limit the scope of the present invention, and it should be understood by those skilled in the art that various modifications and variations can be made without inventive efforts by those skilled in the art based on the technical solution of the present invention.

Claims (10)

1. A multi-degree-of-freedom auxiliary external limb grasping robot system integrating visual sense and tactile active perception is characterized in that: the method comprises the following steps:
the electroencephalogram signal acquisition module is used for acquiring electroencephalogram signals of a user;
the visual information acquisition module is used for acquiring visual information;
the multi-degree-of-freedom auxiliary grasping outer limb robot, which is provided with a multi-degree-of-freedom manipulator, wherein sensors are respectively arranged on the five fingers of the manipulator for detecting the corresponding tactile information;
the control system is configured to extract the position of an object to be grasped and the position to be selected from the visual information, process and analyze the electroencephalogram signals, acquire a movement intention, control the mechanical arm to execute a grasping action according to the movement intention so as to move the target object to the target position, receive tactile information fed back by the mechanical arm in the grasping execution process, and control the grasping force of the mechanical arm according to the difference value between the tactile information and a preset threshold value.
2. The multi-degree-of-freedom auxiliary grasping outer limb robot system fusing visual sense and tactile active perception as claimed in claim 1, wherein: the process of processing and analyzing the electroencephalogram signals by the control system comprises the following steps: after filtering the electroencephalogram signals, extracting the characteristics of the set frequency band, classifying the extracted characteristics, determining the movement intention, and combining preset manipulator control instructions according to the movement intention.
3. The multi-degree-of-freedom auxiliary grasping outer limb robot system fusing visual sense and tactile active perception as claimed in claim 1, wherein: the movement intent includes a target object, a target location, and an action instruction.
4. The multi-degree-of-freedom auxiliary grasping outer limb robot system fusing visual sense and tactile active perception as claimed in claim 1, wherein: the visual information acquisition module comprises an imaging device arranged in front of a user and is used for detecting an object to be gripped and environmental information.
5. The multi-degree-of-freedom auxiliary grasping outer limb robot system fusing visual sense and tactile active perception as claimed in claim 1, wherein: the specific process of the control system for extracting the position of the object to be gripped and the position to be selected from the visual information comprises the following steps: identifying an image acquired by an imaging device, determining an approximate area of each position to be selected, and extracting coordinates of each position to be selected; extracting key points from point cloud data of an area where an object to be grasped is located, calculating a three-dimensional rapid point feature histogram of the key points, describing the relative direction of a normal between the two points, comparing the histogram with a histogram of a target to be identified possibly of a known model to obtain a point-to-point corresponding relation, and determining the position of the object to be grasped.
6. The multi-degree-of-freedom auxiliary grasping outer limb robot system fusing visual sense and tactile active perception as claimed in claim 5, wherein: when the control system extracts the position of an object to be gripped and the position to be selected from the visual information, the position of the manipulator needs to be determined, and the position of the manipulator is determined by a positioning module on the manipulator;
or extracting key points from the point cloud data of the area where the visual information manipulator is located, calculating a three-dimensional fast point feature histogram of the key points, describing the relative direction of the normal between the two points, comparing the histogram with the histogram of the target to be identified of the known model to obtain the point-to-point corresponding relation, and determining the position of the manipulator.
7. The multi-degree-of-freedom auxiliary grasping outer limb robot system fusing visual sense and tactile active perception as claimed in claim 1, wherein: the control system takes visual information and movement intentions as priority control input before the manipulator executes a gripping action, the manipulator selects a gripping control mode according to a classification result after the movement intentions are decoded, calculates the distance and the pose of the manipulator relative to a target object by combining the visual information, and determines a movement path of the manipulator according to the distance and the pose.
8. The multi-degree-of-freedom auxiliary grasping outer limb robot system fusing visual sense and tactile active perception as claimed in claim 1, wherein: when the manipulator executes the gripping action, the control system takes the tactile information fed back by the manipulator as priority control input and completes the gripping control on the object by combining the preset inherent motion mode of the manipulator.
9. The multi-degree-of-freedom auxiliary grasping outer limb robot system fusing visual sense and tactile active perception as claimed in claim 1, wherein: the control system takes the movement intention and the real-time acquired visual information as a feedforward control basis and the manipulator touch information as a feedback control source in the process of executing the gripping action by the manipulator and moving the target object, so that the movement control of the target object is realized.
10. The multi-degree-of-freedom auxiliary grasping outer limb robot system fusing visual sense and tactile active perception as claimed in claim 1, wherein: the sensors are force sensors, the control system makes a difference between the detection value of each force sensor and the threshold value of the force sensor, when the difference value exceeds a set range, the control system adjusts the action of the corresponding manipulator finger and increases or decreases the gripping force until the difference value between the detection value and the threshold value is within the set range.
CN202111492543.6A 2021-12-08 2021-12-08 Multi-degree-of-freedom auxiliary external limb grasping robot system integrating visual sense and tactile sense active perception Pending CN114131635A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111492543.6A CN114131635A (en) 2021-12-08 2021-12-08 Multi-degree-of-freedom auxiliary external limb grasping robot system integrating visual sense and tactile sense active perception

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111492543.6A CN114131635A (en) 2021-12-08 2021-12-08 Multi-degree-of-freedom auxiliary external limb grasping robot system integrating visual sense and tactile sense active perception

Publications (1)

Publication Number Publication Date
CN114131635A true CN114131635A (en) 2022-03-04

Family

ID=80385205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111492543.6A Pending CN114131635A (en) 2021-12-08 2021-12-08 Multi-degree-of-freedom auxiliary external limb grasping robot system integrating visual sense and tactile sense active perception

Country Status (1)

Country Link
CN (1) CN114131635A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115463003A (en) * 2022-09-09 2022-12-13 燕山大学 Upper limb rehabilitation robot control method based on information fusion

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1824472A (en) * 2002-10-29 2006-08-30 松下电器产业株式会社 Robot gripping control unit and robot gripping control technique
CN106671084A (en) * 2016-12-20 2017-05-17 华南理工大学 Mechanical arm self-directed auxiliary system and method based on brain-computer interface
CN106994689A (en) * 2016-01-23 2017-08-01 鸿富锦精密工业(武汉)有限公司 The intelligent robot system and method controlled based on EEG signals
CN109366508A (en) * 2018-09-25 2019-02-22 中国医学科学院生物医学工程研究所 A kind of advanced machine arm control system and its implementation based on BCI
CN208601545U (en) * 2018-06-21 2019-03-15 东莞理工学院 A kind of manipulator and robot with pressure perceptional function
WO2020094205A1 (en) * 2018-11-08 2020-05-14 Mcs Free Zone An enhanced reality underwater maintenance syestem by using a virtual reality manipulator (vrm)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1824472A (en) * 2002-10-29 2006-08-30 松下电器产业株式会社 Robot gripping control unit and robot gripping control technique
CN106994689A (en) * 2016-01-23 2017-08-01 鸿富锦精密工业(武汉)有限公司 The intelligent robot system and method controlled based on EEG signals
CN106671084A (en) * 2016-12-20 2017-05-17 华南理工大学 Mechanical arm self-directed auxiliary system and method based on brain-computer interface
CN208601545U (en) * 2018-06-21 2019-03-15 东莞理工学院 A kind of manipulator and robot with pressure perceptional function
CN109366508A (en) * 2018-09-25 2019-02-22 中国医学科学院生物医学工程研究所 A kind of advanced machine arm control system and its implementation based on BCI
WO2020094205A1 (en) * 2018-11-08 2020-05-14 Mcs Free Zone An enhanced reality underwater maintenance syestem by using a virtual reality manipulator (vrm)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张娜 (Zhang Na), et al.: "精确抓握力量控制的脑动力学研究" [Brain dynamics of precise grip force control], 《中国生物医学工程学报》 (Chinese Journal of Biomedical Engineering), vol. 39, no. 6, pages 711-718 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115463003A (en) * 2022-09-09 2022-12-13 燕山大学 Upper limb rehabilitation robot control method based on information fusion

Similar Documents

Publication Publication Date Title
Shi et al. Computer vision-based grasp pattern recognition with application to myoelectric control of dexterous hand prosthesis
Singha et al. Indian sign language recognition using eigen value weighted euclidean distance based classification technique
Ahuja et al. Static vision based Hand Gesture recognition using principal component analysis
US9734393B2 (en) Gesture-based control system
CN109993073B (en) Leap Motion-based complex dynamic gesture recognition method
CN112990074B (en) VR-based multi-scene autonomous control mixed brain-computer interface online system
Ahuja et al. Hand gesture recognition using PCA
Gandarias et al. Human and object recognition with a high-resolution tactile sensor
Qi et al. Computer vision-based hand gesture recognition for human-robot interaction: a review
Tang et al. Wearable supernumerary robotic limb system using a hybrid control approach based on motor imagery and object detection
Zhang et al. Robotic control of dynamic and static gesture recognition
CN114131635A (en) Multi-degree-of-freedom auxiliary external limb grasping robot system integrating visual sense and tactile sense active perception
Nandwana et al. A survey paper on hand gesture recognition
Cognolato et al. Improving robotic hand prosthesis control with eye tracking and computer vision: A multimodal approach based on the visuomotor behavior of grasping
CN114495273A (en) Robot gesture teleoperation method and related device
Wang et al. EXGbuds: Universal wearable assistive device for disabled people to interact with the environment seamlessly
Wei et al. Fusing EMG and visual data for hands-free control of an intelligent wheelchair
Enikeev et al. Recognition of sign language using leap motion controller data
Jindal et al. A comparative analysis of established techniques and their applications in the field of gesture detection
Chaudhary Finger-stylus for non touch-enable systems
Huda et al. Real-time hand-gesture recognition for the control of wheelchair
Qiu et al. Research on Intention Flexible Mapping Algorithm for Elderly Escort Robot
Foresi et al. Human-robot cooperation via brain computer interface in assistive scenario
Foroutan et al. Control of computer pointer using hand gesture recognition in motion pictures
Sliman Ocular Guided Robotized Wheelchair for Quadriplegics Users

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination