CN113942009B - Robot bionic hand grabbing method - Google Patents

Robot bionic hand grabbing method

Info

Publication number
CN113942009B
CN113942009B
Authority
CN
China
Prior art keywords
target object
neural network
robot
network model
visual
Prior art date
Legal status
Active
Application number
CN202111070054.1A
Other languages
Chinese (zh)
Other versions
CN113942009A (en)
Inventor
丁梓豪
陈国栋
王振华
Current Assignee
Suzhou University
Original Assignee
Suzhou University
Priority date
Filing date
Publication date
Application filed by Suzhou University
Priority to CN202111070054.1A
Publication of CN113942009A
Application granted
Publication of CN113942009B

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems

Abstract

The present disclosure provides a robot bionic hand grasping method, including: acquiring a target object image; analyzing the target object image to obtain visual information of the target object; determining visual grabbing parameters of the robot bionic hand based on the visual information of the target object; grabbing the target object based on the visual grabbing parameters of the robot bionic hand; obtaining tactile information and a grabbing result of the target object; acquiring soft and hard attribute data of the target object based on the tactile information and the visual information of the target object; and adjusting tactile grabbing parameters of the robot bionic hand based on the soft and hard attribute data of the target object and continuing to grab.

Description

Robot bionic hand grabbing method
Technical Field
The disclosure relates to a robot bionic hand grabbing method, and belongs to the technical field of robots.
Background
With the continuous development of robot technology, robots have been applied in various fields to replace humans in completing complicated work.
Because the work content differs from field to field, different grippers need to be customized for the robot. Traditional robot grippers are based on manual teaching or visual guidance and cannot adapt to objects made of different materials, so soft objects are easily damaged.
How to plan the correct manipulation action according to the hardness of an object, ensuring that the object is neither dropped nor damaged, is a current difficulty in the robotics field.
Disclosure of Invention
In order to solve at least one of the above technical problems, the present disclosure provides a robot bionic hand grabbing method.
According to an aspect of the present disclosure, there is provided a robot bionic hand grasping method, including:
acquiring a target object image;
inputting the target object image into a visual detection neural network model for identification to obtain visual information of the target object, wherein the visual information includes the target object abscissa, ordinate, width, height and category;
determining visual grabbing parameters of the robot bionic hand based on the visual information of the target object;
grabbing the target object based on the visual grabbing parameters of the robot bionic hand;
obtaining tactile information and a grabbing result of the target object;
acquiring soft and hard attribute data of the target object based on the tactile information and the visual information of the target object; and
adjusting tactile grabbing parameters of the robot bionic hand based on the soft and hard attribute data of the target object and continuing to grab;
wherein the parameter settings of the visual detection neural network model include: the number of convolution layers is 6, the number of pooling layers is 6, the number of training iterations is 10000, the number of training samples per batch is 20, and the learning rate is 0.001;
the training process of the visual detection neural network model includes:
collecting visual samples, including: acquiring an image containing the target object with a depth camera, preprocessing the image, and acquiring image label information, wherein the preprocessing includes scaling the image containing the target object, and acquiring the image label information includes acquiring the target object abscissa, ordinate, width, height and category in the image;
inputting a training set of the collected visual samples into the visual detection neural network model for training to obtain a trained visual detection neural network model; and
inputting a test set of the collected visual samples into the trained visual detection neural network model for test verification, and obtaining a usable visual detection neural network model after the verification is passed;
the obtaining of the tactile information and the grabbing result of the target object includes: a tactile sensor of the robot bionic hand acquires contact force data from different contact points in time sequence, together with the grabbing result over the corresponding time sequence, wherein the contact force data form a two-dimensional array indexed by time and contact point;
the acquiring of the soft and hard attribute data of the target object based on the tactile information and the visual information of the target object includes: inputting the two-dimensional array indexed by time and contact point, together with the target object category, into a tactile detection neural network model for recognition to obtain the soft and hard attribute data of the target object;
the tactile detection neural network model is a recurrent neural network model, and the parameter settings of the recurrent neural network model include: the hidden layer size is 64, the number of training iterations is 10000, the number of training samples per batch is 20, and the learning rate is 0.001; the process of establishing the tactile detection neural network model includes:
collecting tactile samples, wherein the tactile samples are collected for different contact points over a time sequence, and one set of tactile samples is collected each time a visual sample is collected;
inputting a training set of the collected tactile samples into the tactile detection neural network model for training to obtain a trained tactile detection neural network model; and
inputting a test set of the collected tactile samples into the trained tactile detection neural network model for test verification, and obtaining a usable tactile detection neural network model after the verification is passed.
According to the robot bionic hand grabbing method of at least one embodiment of the present disclosure, the acquiring of the image of the target object includes:
the target object image is acquired by a depth camera, which is mounted above the robot and is contactless with the robot.
According to the robot bionic hand grabbing method in at least one embodiment of the present disclosure, the determining of the visual grabbing parameters of the robot bionic hand based on the visual information of the target object includes:
determining the motion trajectory and grabbing posture of the robot bionic hand based on the visual information of the target object.
According to the robot bionic hand grabbing method in at least one embodiment of the present disclosure, the adjusting of the tactile grabbing parameters of the robot bionic hand based on the soft and hard attribute data of the target object includes:
adjusting the contact force between the robot bionic hand and the target object based on the soft and hard attribute data of the target object.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and together with the description serve to explain the principles of the disclosure.
Fig. 1 is a schematic flow diagram of a robotic bionic hand grasping method according to one embodiment of the disclosure.
Fig. 2 is a schematic diagram of a visual inspection neural network architecture according to one embodiment of the present disclosure.
FIG. 3 is a schematic diagram of a haptic detection recurrent neural network architecture, according to one embodiment of the present disclosure.
Fig. 4 is a schematic flow diagram of a visual tactile sample acquisition method according to one embodiment of the present disclosure.
Fig. 5 is a schematic view of a box filled with metal according to one embodiment of the present disclosure.
Fig. 6 is a schematic view of an empty box according to one embodiment of the present disclosure.
FIG. 7 is a sample diagram of haptic data according to one embodiment of the present disclosure.
Fig. 8 is a sample detail schematic of haptic data according to one embodiment of the present disclosure.
Fig. 9 is a schematic structural diagram of a robotic biomimetic hand grasping system according to one embodiment of the present disclosure.
Description of the reference numerals
1000. Robot bionic hand grabbing system
1002. Robot bionic hand
1004. Visual information acquisition module
1006. Tactile information acquisition module
1008. Visual analysis module
1010. Haptic analysis module
1012. Grabbing control module
1100. Bus
1200. Processor
1300. Memory
1400. Other circuits
Detailed Description
The present disclosure will be described in further detail with reference to the drawings and embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant matter and not restrictive of the disclosure. It should be further noted that, for the convenience of description, only the portions relevant to the present disclosure are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. Technical solutions of the present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Unless otherwise indicated, the illustrated exemplary embodiments/examples are to be understood as providing exemplary features of various details of some ways in which the technical concepts of the present disclosure may be practiced. Accordingly, unless otherwise indicated, features of the various embodiments may be additionally combined, separated, interchanged, and/or rearranged without departing from the technical concept of the present disclosure.
The use of cross-hatching and/or shading in the drawings is generally used to clarify the boundaries between adjacent components. As such, unless otherwise specified, the presence or absence of cross-hatching or shading does not convey or indicate any preference or requirement for a particular material, material property, size, proportion, commonality among the illustrated components and/or any other characteristic, attribute, property, etc., of a component. Further, in the drawings, the size and relative sizes of components may be exaggerated for clarity and/or descriptive purposes. While example embodiments may be practiced differently, the specific process sequence may be performed in a different order than that described. For example, two processes described consecutively may be performed substantially simultaneously or in reverse order to that described. In addition, like reference numerals denote like parts.
When an element is referred to as being "on" or "over," "connected to" or "coupled to" another element, it can be directly on, connected or coupled to the other element or intervening elements may be present. However, when an element is referred to as being "directly on," "directly connected to" or "directly coupled to" another element, there are no intervening elements present. For purposes of this disclosure, the term "connected" may refer to physically, electrically, etc., and may or may not have intermediate components.
For descriptive purposes, the present disclosure may use spatially relative terms such as "beneath", "below", "under", "lower", "above", "over", "upper", "higher", and "side" (e.g., as in "sidewall"), etc., to describe the relationship of one component to another (other) component(s) as shown in the figures. Spatially relative terms are intended to encompass different orientations of the device in use, operation, and/or manufacture in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "below" or "beneath" other elements or features would then be oriented "above" the other elements or features. Thus, the exemplary term "below" can encompass both an orientation of above and below. Further, the device may be otherwise positioned (e.g., rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
The terminology used herein is for the purpose of describing particular embodiments and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when the terms "comprises" and/or "comprising" and variations thereof are used in this specification, the stated features, integers, steps, operations, elements, components and/or groups thereof are stated to be present but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It is also noted that, as used herein, the terms "substantially," "about," and other similar terms are used as approximate terms and not as degree terms, and as such, are used to interpret inherent deviations in measured values, calculated values, and/or provided values that would be recognized by one of ordinary skill in the art.
Fig. 1 is a schematic flow diagram of a robotic bionic hand grasping method according to one embodiment of the disclosure.
As shown in fig. 1, a robot bionic hand grasping method S100 includes:
S102: acquiring a target object image;
S104: analyzing the target object image to obtain visual information of the target object;
S106: determining visual grabbing parameters of the robot bionic hand based on the visual information of the target object;
S108: grabbing the target object based on the visual grabbing parameters of the robot bionic hand;
S110: obtaining tactile information and a grabbing result of the target object;
S112: acquiring soft and hard attribute data of the target object based on the tactile information and the visual information of the target object; and
S114: adjusting tactile grabbing parameters of the robot bionic hand based on the soft and hard attribute data of the target object and continuing to grab.
Wherein acquiring an image of a target object comprises:
the target object image is acquired by a depth camera, which is mounted above the robot and is contactless with the robot.
Wherein analyzing the target object image to obtain the visual information of the target object includes:
inputting the target object image into a visual detection neural network model for identification to obtain visual information of the target object, wherein the visual information includes the target object abscissa, ordinate, width, height and category.
Wherein the parameter settings of the visual detection neural network model include: the number of convolution layers is 6, the number of pooling layers is 6, the number of training iterations is 10000, the number of training samples per batch is 20, and the learning rate is 0.001;
the training process of the visual detection neural network model includes:
collecting visual samples, including: acquiring an image containing the target object with a depth camera, preprocessing the image, and acquiring image label information, wherein the preprocessing includes scaling the image containing the target object, and acquiring the image label information includes acquiring the target object abscissa, ordinate, width, height and category in the image;
inputting a training set of the collected visual samples into the visual detection neural network model for training to obtain a trained visual detection neural network model; and
inputting a test set of the collected visual samples into the trained visual detection neural network model for test verification, and obtaining a usable visual detection neural network model after the verification is passed.
Wherein determining the visual grabbing parameters of the robot bionic hand based on the visual information of the target object includes:
determining the motion trajectory and grabbing posture of the robot bionic hand based on the visual information of the target object.
Wherein obtaining the tactile information and the grabbing result of the target object includes:
a tactile sensor of the robot bionic hand acquires contact force data from different contact points in time sequence, together with the grabbing result over the corresponding time sequence, and the contact force data form a two-dimensional array indexed by time and contact point.
Wherein acquiring the soft and hard attribute data of the target object based on the tactile information and the visual information of the target object includes:
inputting the two-dimensional array of contact force data indexed by time and contact point, together with the target object category, into a tactile detection neural network model for identification to obtain the soft and hard attribute data of the target object.
Wherein the tactile detection neural network model is a recurrent neural network model,
and the parameter settings of the recurrent neural network model include: the hidden layer size is 64, the number of training iterations is 10000, the number of training samples per batch is 20, and the learning rate is 0.001;
the process of establishing the tactile detection neural network model includes:
collecting tactile samples, wherein the tactile samples are collected for different contact points over a time sequence, and one set of tactile samples is collected each time a visual sample is collected;
inputting a training set of the collected tactile samples into the tactile detection neural network model for training to obtain a trained tactile detection neural network model; and
inputting a test set of the collected tactile samples into the trained tactile detection neural network model for test verification, and obtaining a usable tactile detection neural network model after the verification is passed.
Wherein adjusting the tactile grabbing parameters of the robot bionic hand based on the soft and hard attribute data of the target object includes:
adjusting the contact force between the robot bionic hand and the target object based on the soft and hard attribute data of the target object.
In summary, by acquiring and analyzing visual information of the target object, the target can be detected, classified and located. Once the robot approaches the object, the soft or hard attribute of the object is obtained through touch, and the robot bionic hand is then controlled to execute different grabbing strategies according to the material attribute of the object, adjusting the magnitude of the grabbing force and the grabbing position and posture, so that the object is grabbed stably and safely.
The method establishes a visual-tactile state experience knowledge base by imitating the cognitive approach of human experts. Through interactive learning between the robot and the environment, the state of the robot during its work is perceived by means of vision and force sensing, the robot's experience knowledge base is accumulated and updated, the execution state of the robot during its work is monitored, the completion state of the work is evaluated based on a deep learning method, and success or failure is judged. The robot can thereby be helped to replace humans in completing various complex tasks, improving production efficiency.
Fig. 2 is a schematic diagram of a visual inspection neural network structure according to an embodiment of the present disclosure.
As shown in fig. 2, the image input is [512, 512, 3]: picture size 512 × 512 with three channels. The input and output data dimensions of each convolution layer and pooling layer are as follows:
first convolution layer, parameters: [5, 5, 3, 32], convolution kernel 5 × 5, input channels 3, output channels 32, ReLU activation; input: 512 × 512 × 3, output: 512 × 512 × 32;
first pooling layer, max pooling; input: 512 × 512 × 32, output: 256 × 256 × 32;
second convolution layer, parameters: [5, 5, 32, 64], convolution kernel 5 × 5, input channels 32, output channels 64, ReLU activation; input: 256 × 256 × 32, output: 256 × 256 × 64;
second pooling layer, max pooling; input: 256 × 256 × 64, output: 128 × 128 × 64;
third convolution layer, parameters: [3, 3, 64, 128], convolution kernel 3 × 3, input channels 64, output channels 128, ReLU activation; input: 128 × 128 × 64, output: 128 × 128 × 128;
third pooling layer, max pooling; input: 128 × 128 × 128, output: 64 × 64 × 128;
fourth convolution layer, parameters: [3, 3, 128, 192], convolution kernel 3 × 3, input channels 128, output channels 192, ReLU activation; input: 64 × 64 × 128, output: 64 × 64 × 192;
fourth pooling layer, max pooling; input: 64 × 64 × 192, output: 32 × 32 × 192;
fifth convolution layer, parameters: [3, 3, 192, 256], convolution kernel 3 × 3, input channels 192, output channels 256, ReLU activation; input: 32 × 32 × 192, output: 32 × 32 × 256;
fifth pooling layer, max pooling; input: 32 × 32 × 256, output: 16 × 16 × 256;
sixth convolution layer, parameters: [3, 3, 256, 512], convolution kernel 3 × 3, input channels 256, output channels 512, ReLU activation; input: 16 × 16 × 256, output: 16 × 16 × 512;
sixth pooling layer, max pooling; input: 16 × 16 × 512, output: 8 × 8 × 512;
fully connected layer 1 flattens the last output; input: 8 × 8 × 512 (flattened to 32768), output: 1024;
fully connected layer 2 outputs the object category; input: 1024, output: N (custom, equal to the number of sample categories collected).
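A minimal PyTorch sketch of the network described above. The layer sizes follow the listing (six convolution plus max-pooling stages, then two fully connected layers); the use of 'same'-style padding so that only the pooling halves the spatial size, and the ReLU between the two fully connected layers, are assumptions of the sketch rather than details fixed by the patent.

```python
import torch
import torch.nn as nn

class VisualDetectionCNN(nn.Module):
    """Sketch of the Fig. 2 network: six conv + max-pool stages and two fully connected layers."""

    def __init__(self, num_classes: int):
        super().__init__()
        # (kernel, in_channels, out_channels) per convolution stage, as listed above.
        cfg = [(5, 3, 32), (5, 32, 64), (3, 64, 128),
               (3, 128, 192), (3, 192, 256), (3, 256, 512)]
        stages = []
        for k, c_in, c_out in cfg:
            stages += [nn.Conv2d(c_in, c_out, kernel_size=k, padding=k // 2),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(kernel_size=2)]   # 512 -> 256 -> ... -> 8
        self.features = nn.Sequential(*stages)
        self.fc1 = nn.Linear(8 * 8 * 512, 1024)   # flatten 8 x 8 x 512 = 32768
        self.fc2 = nn.Linear(1024, num_classes)   # N object categories

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                # (B, 512, 8, 8) for a 512 x 512 input
        x = torch.flatten(x, start_dim=1)
        return self.fc2(torch.relu(self.fc1(x)))

# Example: one 512 x 512 RGB image through the network.
# logits = VisualDetectionCNN(num_classes=5)(torch.randn(1, 3, 512, 512))
```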
Fig. 3 is a schematic structural diagram of a haptic sense recurrent neural network provided in accordance with an embodiment of the present disclosure.
As shown in fig. 3, the haptic detection recurrent neural network structure includes:
Network input layer: x, a sequence of 25 consecutive time steps is selected here;
Hidden layer: the feature dimension of the hidden layer determines the dimension of the hidden state hidden_state and can simply be regarded as the size of a weight matrix; the hidden size is set to 64 here; and
LSTM layer: the number of stacked LSTM layers defaults to 1.
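A minimal PyTorch sketch of this recurrent model: a 25-step input sequence, hidden size 64, and a single LSTM layer. The number of contact points per time step, the linear classification head on the final hidden state, and the omission of the object-category input (which the method above feeds in alongside the force array) are assumptions made for brevity.

```python
import torch
import torch.nn as nn

class TactileDetectionLSTM(nn.Module):
    """Sketch of the Fig. 3 recurrent network for soft/hard attribute recognition."""

    def __init__(self, num_contact_points: int = 16, num_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_contact_points,
                            hidden_size=64, num_layers=1, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 25, num_contact_points) contact-force sequence
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])   # logits over soft/hard attribute classes

# Example: a batch of one 25-step tactile window from a 16-point sensor.
# logits = TactileDetectionLSTM()(torch.randn(1, 25, 16))
```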
In conclusion, the present invention constructs a visual-tactile state experience knowledge base by imitating the cognitive approach of human experts. Through interactive learning between the robot and the environment, the state of the robot during its work is perceived by means of vision and force sensing, the robot's experience knowledge base is accumulated and updated, the execution state of the robot during its work is monitored, the completion state of the work is evaluated based on a deep learning method, and success or failure is judged. The invention can help the robot replace humans in completing various complex tasks and improve production efficiency.
Fig. 9 is a schematic structural diagram of a robotic biomimetic hand grasping system according to one embodiment of the present disclosure.
The apparatus may include corresponding means for performing each or several of the steps of the flowcharts described above. Thus, each step or several steps in the above-described flow charts may be performed by a respective module, and the apparatus may comprise one or more of these modules. The modules may be one or more hardware modules specifically configured to perform the respective steps, or implemented by a processor configured to perform the respective steps, or stored within a computer-readable medium for implementation by a processor, or by some combination.
The hardware architecture may be implemented with a bus architecture. The bus architecture may include any number of interconnecting buses and bridges depending on the specific application of the hardware and the overall design constraints. The bus connects together various circuits including one or more processors, memories, and/or hardware modules. The bus may also connect various other circuits such as peripherals, voltage regulators, power management circuits, external antennas, and the like.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one connection line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present disclosure includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the implementations of the present disclosure. The processor performs the various methods and processes described above. For example, method embodiments in the present disclosure may be implemented as a software program tangibly embodied in a machine-readable medium, such as a memory. In some embodiments, some or all of the software program may be loaded and/or installed via memory and/or a communication interface. When the software program is loaded into memory and executed by a processor, one or more steps of the method described above may be performed. Alternatively, in other embodiments, the processor may be configured to perform one of the methods described above in any other suitable manner (e.g., by means of firmware).
The logic and/or steps represented in the flowcharts or otherwise described herein may be embodied in any readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
For the purposes of this description, a "readable storage medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable read-only memory (CDROM). In addition, the readable storage medium may even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in the memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following technologies, which are well known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the method implementing the above embodiments may be implemented by hardware that is instructed to implement by a program, which may be stored in a readable storage medium, and when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
Fig. 4 is a schematic flow diagram of a visual tactile sample acquisition method according to one embodiment of the present disclosure.
As shown in fig. 4, the visual tactile sample acquisition method S200 includes:
S201: after the target object is placed, starting to execute an acquisition task;
S202: manually controlling the robot to move until it approaches the target object;
S203: shooting with a camera and acquiring visual information of the target object;
S204: pre-grabbing the target object and acquiring tactile data through tactile sensing;
S205: judging whether the grabbing is successful; if so, going to S206, otherwise going to S204;
S206: moving the target object, and going to S207; and
S207: judging whether the target object slips; if so, going to S205, and if not, going to S201.
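A minimal sketch of this acquisition loop; the robot, camera and hand driver methods and the operator-labeling call are hypothetical placeholders, and the slip check is simplified to a single pass rather than the jump back to S205 shown in the flow.

```python
def collect_visual_tactile_samples(robot, camera, hand, num_objects):
    """Collect one visual sample plus the paired tactile windows per object (S201-S207)."""
    dataset = []
    for _ in range(num_objects):
        robot.wait_for_object_placement()           # S201: object placed, start the task
        robot.jog_until_near_target()               # S202: manual approach
        image = camera.capture()                    # S203: visual information
        label = robot.operator_label()              #        operator supplies x, y, w, h, class

        while True:
            tactile = hand.pre_grasp_and_record()   # S204: pre-grasp, record the touch window
            if hand.grasp_succeeded():              # S205: retry until the grasp holds
                break
        robot.move_object()                         # S206: move the object
        slipped = hand.object_slipped()             # S207: slip check closes the loop
        dataset.append({"image": image, "label": label,
                        "tactile": tactile, "slipped": slipped})
    return dataset
```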
Fig. 5 is a schematic view of a box filled with metal according to one embodiment of the present disclosure.
Fig. 6 is a schematic view of an empty box according to one embodiment of the present disclosure.
Fig. 7 is a sample diagram of haptic data according to one embodiment of the present disclosure.
FIG. 8 is a sample detail schematic of haptic data according to one embodiment of the present disclosure.
Fig. 9 is a schematic structural diagram of a robotic biomimetic hand grasping system according to one embodiment of the present disclosure.
As shown in fig. 9, a robotic biomimetic hand grasping system 1000 includes:
a robot bionic hand 1002 for grasping an object;
a visual information acquisition module 1004 for acquiring the target object image, preferably a depth camera arranged above the robot and not in contact with the robot;
a tactile information acquisition module 1006 for acquiring the tactile information of the target object, preferably an array-type tactile sensor arranged on the gripper body of the bionic hand;
a visual analysis module 1008 for analyzing the target object image to obtain the target object category, position and size;
a tactile analysis module 1010, in communication connection with the robot bionic hand, for receiving and analyzing the data acquired by the tactile information acquisition module to obtain the soft and hard attribute data of the object material; and
a grabbing control module 1012, in communication connection with the robot bionic hand, for controlling the robot bionic hand to grab the object based on the target object category, position, size and soft and hard attribute data.
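An illustrative composition of the modules listed above; the class name, attribute names, and method calls are assumptions of the sketch rather than identifiers from the patent.

```python
from dataclasses import dataclass

@dataclass
class RoboticBionicHandGraspingSystem:
    """Wires the acquisition, analysis and control modules of Fig. 9 together (illustrative)."""
    hand: object                 # 1002 robot bionic hand
    visual_acquisition: object   # 1004 depth camera above the robot
    tactile_acquisition: object  # 1006 array tactile sensor on the gripper
    visual_analysis: object      # 1008 category / position / size
    tactile_analysis: object     # 1010 soft and hard attribute
    grasp_control: object        # 1012 grasp strategy and force control

    def grasp_once(self):
        image = self.visual_acquisition.capture()
        category, position, size = self.visual_analysis.analyze(image)
        self.grasp_control.approach(self.hand, position, size)
        touch = self.tactile_acquisition.read()
        softness = self.tactile_analysis.classify(touch, category)
        return self.grasp_control.grasp(self.hand, category, position, size, softness)
```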
In the description herein, reference to the terms "one embodiment/mode," "some embodiments/modes," "example," "specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment/mode or example is included in at least one embodiment/mode or example of the present application. In this specification, schematic uses of the above terms do not necessarily refer to the same embodiment/mode or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments/modes or examples. In addition, the various embodiments/modes or examples described in this specification, as well as their features, can be combined by those skilled in the art as long as they do not conflict with one another.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
It will be understood by those skilled in the art that the foregoing embodiments are merely for clarity of illustration of the disclosure and are not intended to limit the scope of the disclosure. Other variations or modifications may be made to those skilled in the art, based on the above disclosure, and still be within the scope of the present disclosure.

Claims (4)

1. A robot bionic hand grabbing method, characterized by comprising the following steps:
acquiring a target object image;
inputting the target object image into a visual detection neural network model for identification to obtain visual information of the target object, wherein the visual information comprises a target object abscissa, a target object ordinate, a target object width, a target object height and a target object category;
determining visual grabbing parameters of the robot bionic hand based on the visual information of the target object;
grabbing the target object based on the visual grabbing parameters of the robot bionic hand;
obtaining tactile information and a grabbing result of the target object;
acquiring soft and hard attribute data of the target object based on the tactile information of the target object and the visual information of the target object; and
adjusting tactile grabbing parameters of the robot bionic hand based on the soft and hard attribute data of the target object and continuing to grab;
wherein the parameter settings of the visual detection neural network model comprise: the number of convolution layers is 6, the number of pooling layers is 6, the number of training iterations is 10000, the number of training samples per batch is 20, and the learning rate is 0.001;
the training process of the visual detection neural network model comprises:
collecting visual samples, comprising: acquiring an image containing the target object with a depth camera, preprocessing the image, and acquiring image label information, wherein the preprocessing comprises scaling the image containing the target object, and acquiring the image label information comprises acquiring the target object abscissa, ordinate, width, height and category in the image;
inputting a training set of the collected visual samples into the visual detection neural network model for training to obtain a trained visual detection neural network model; and
inputting a test set of the collected visual samples into the trained visual detection neural network model for test verification, and obtaining a usable visual detection neural network model after the verification is passed;
the obtaining of the tactile information and the grabbing result of the target object comprises: a tactile sensor of the robot bionic hand acquires contact force data from different contact points in time sequence, together with the grabbing result over the corresponding time sequence, and the contact force data form a two-dimensional array indexed by time and contact point;
the acquiring of the soft and hard attribute data of the target object based on the tactile information of the target object and the visual information of the target object comprises: inputting the two-dimensional array indexed by time and contact point, together with the target object category, into the tactile detection neural network model for recognition to obtain the soft and hard attribute data of the target object;
the tactile detection neural network model is a recurrent neural network model, and the parameter settings of the recurrent neural network model comprise: the hidden layer size is 64, the number of training iterations is 10000, the number of training samples per batch is 20, and the learning rate is 0.001; the process of establishing the tactile detection neural network model comprises:
collecting tactile samples, wherein the tactile samples are collected for different contact points over a time sequence, and one set of tactile samples is collected each time a visual sample is collected;
inputting a training set of the collected tactile samples into the tactile detection neural network model for training to obtain a trained tactile detection neural network model; and
inputting a test set of the collected tactile samples into the trained tactile detection neural network model for test verification, and obtaining a usable tactile detection neural network model after the verification is passed.
2. The robotic biomimetic hand-grabbing method as recited in claim 1, wherein the acquiring an image of a target object comprises:
the target object image is acquired by a depth camera, which is mounted above the robot and is contactless with the robot.
3. The method of claim 1, wherein determining the visual grabbing parameters of the robot bionic hand based on the visual information of the target object comprises:
determining the motion trajectory and grabbing posture of the robot bionic hand based on the visual information of the target object.
4. The method according to claim 1, wherein adjusting the tactile grabbing parameters of the robot bionic hand based on the soft and hard attribute data of the target object comprises:
adjusting the contact force between the robot bionic hand and the target object based on the soft and hard attribute data of the target object.
CN202111070054.1A 2021-09-13 2021-09-13 Robot bionic hand grabbing method Active CN113942009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111070054.1A CN113942009B (en) 2021-09-13 2021-09-13 Robot bionic hand grabbing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111070054.1A CN113942009B (en) 2021-09-13 2021-09-13 Robot bionic hand grabbing method

Publications (2)

Publication Number Publication Date
CN113942009A CN113942009A (en) 2022-01-18
CN113942009B true CN113942009B (en) 2023-04-18

Family

ID=79328152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111070054.1A Active CN113942009B (en) 2021-09-13 2021-09-13 Robot bionic hand grabbing method

Country Status (1)

Country Link
CN (1) CN113942009B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115876374B (en) * 2022-12-30 2023-12-12 中山大学 Flexible touch structure of nose of robot dog and method for identifying soft and hard attributes of contact

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007041295A2 (en) * 2005-09-30 2007-04-12 Irobot Corporation Companion robot for personal interaction
CN107891448A (en) * 2017-12-25 2018-04-10 胡明建 The design method that a kind of computer vision sense of hearing tactile is mutually mapped with the time
CN108621159A (en) * 2018-04-28 2018-10-09 首都师范大学 A kind of Dynamic Modeling in Robotics method based on deep learning
CN110458281A (en) * 2019-08-02 2019-11-15 中科新松有限公司 The deeply study rotation speed prediction technique and system of ping-pong robot
CN111168686A (en) * 2020-02-25 2020-05-19 深圳市商汤科技有限公司 Object grabbing method, device, equipment and storage medium
CN113172629A (en) * 2021-05-06 2021-07-27 清华大学深圳国际研究生院 Object grabbing method based on time sequence tactile data processing

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4622384B2 (en) * 2004-04-28 2011-02-02 日本電気株式会社 ROBOT, ROBOT CONTROL DEVICE, ROBOT CONTROL METHOD, AND ROBOT CONTROL PROGRAM
JP2010149267A (en) * 2008-12-26 2010-07-08 Yaskawa Electric Corp Robot calibration method and device
US8706299B2 (en) * 2011-08-02 2014-04-22 GM Global Technology Operations LLC Method and system for controlling a dexterous robot execution sequence using state classification
EP3172526A1 (en) * 2014-07-22 2017-05-31 SynTouch, LLC Method and applications for measurement of object tactile properties based on how they likely feel to humans
CN106960099B (en) * 2017-03-28 2019-07-26 清华大学 A kind of manipulator grasp stability recognition methods based on deep learning
KR102275520B1 (en) * 2018-05-24 2021-07-12 티엠알더블유 파운데이션 아이피 앤드 홀딩 에스에이알엘 Two-way real-time 3d interactive operations of real-time 3d virtual objects within a real-time 3d virtual world representing the real world
CN108789384B (en) * 2018-09-03 2024-01-09 深圳市波心幻海科技有限公司 Flexible driving manipulator and object recognition method based on three-dimensional modeling
CN110091331A (en) * 2019-05-06 2019-08-06 广东工业大学 Grasping body method, apparatus, equipment and storage medium based on manipulator
CN111055279B (en) * 2019-12-17 2022-02-15 清华大学深圳国际研究生院 Multi-mode object grabbing method and system based on combination of touch sense and vision
CN111444459A (en) * 2020-02-21 2020-07-24 哈尔滨工业大学 Method and system for determining contact force of teleoperation system
CN111590611B (en) * 2020-05-25 2022-12-02 北京具身智能科技有限公司 Article classification and recovery method based on multi-mode active perception
CN112668607A (en) * 2020-12-04 2021-04-16 深圳先进技术研究院 Multi-label learning method for recognizing tactile attributes of target object
CN112388655B (en) * 2020-12-04 2021-06-04 齐鲁工业大学 Grabbed object identification method based on fusion of touch vibration signals and visual images
CN113232019A (en) * 2021-05-13 2021-08-10 中国联合网络通信集团有限公司 Mechanical arm control method and device, electronic equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007041295A2 (en) * 2005-09-30 2007-04-12 Irobot Corporation Companion robot for personal interaction
CN107891448A (en) * 2017-12-25 2018-04-10 胡明建 The design method that a kind of computer vision sense of hearing tactile is mutually mapped with the time
CN108621159A (en) * 2018-04-28 2018-10-09 首都师范大学 A kind of Dynamic Modeling in Robotics method based on deep learning
CN110458281A (en) * 2019-08-02 2019-11-15 中科新松有限公司 The deeply study rotation speed prediction technique and system of ping-pong robot
CN111168686A (en) * 2020-02-25 2020-05-19 深圳市商汤科技有限公司 Object grabbing method, device, equipment and storage medium
CN113172629A (en) * 2021-05-06 2021-07-27 清华大学深圳国际研究生院 Object grabbing method based on time sequence tactile data processing

Also Published As

Publication number Publication date
CN113942009A (en) 2022-01-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant