CN111459278A - Robot grabbing state discrimination method based on touch array - Google Patents
Robot grabbing state discrimination method based on touch array
- Publication number
- CN111459278A (application CN202010252733.XA)
- Authority
- CN
- China
- Prior art keywords
- grabbing
- layer
- model
- robot
- tactile
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/016—Input arrangements with force or tactile feedback as computer generated output to the user
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The present disclosure provides a robot grabbing state discrimination method based on a tactile array, comprising: step S1: constructing a tactile data set of robot-grabbed objects; step S2: normalizing the data set constructed in step S1; step S3: constructing a training model based on a multilayer perceptron; step S4: initializing the training model parameters; step S5: training the model to convergence on the data set processed in step S2 to obtain the optimal model parameters; and step S6: performing actual grabbing operations to discriminate the robot grabbing state based on the tactile array. The method can be applied to robot grabbing operations on a variety of objects: it judges whether a grab is successful according to the grabbing force distribution and adjusts the robot's grabbing force in real time, achieving high-accuracy robot grabbing with a success rate above 99%.
Description
Technical Field
The disclosure relates to the technical field of machine learning and deep learning, in particular to a robot grabbing state distinguishing method based on a touch array.
Background
Traditional approaches to determining robot gripping force fall into two categories. The first uses a preset force threshold to supply a control force capable of gripping the object; the threshold may be set empirically, or computed from robot dynamics as a closed-form solution for the grasping force. The second uses sensor feedback of the gripping force to realize force-feedback control of the grab. Sensor-based feedback itself takes two forms. Indirect grabbing-force feedback relies on a six-dimensional force sensor or joint torque sensors: six-dimensional force information and joint torques of the robot end effector are measured directly, and the gripping force of the end effector is then solved indirectly through robot dynamics. Because the resulting force data contain both sensor measurement error and dynamic-model error, the feedback is insufficiently precise and ill-suited to accurate grabbing operations. Direct grabbing-force feedback obtains the gripping force on the grabbed object directly from a pressure sensor or tactile sensor mounted on the robot end effector, so that grab success can be judged directly from the fed-back force.
A pressure sensor provides only a single-point reading and reflects only single-point contact between the robot and the manipulated object; for the surface contact typical of most grabbing operations it cannot fully capture the contact-force distribution. Judging grab success from a single-point pressure sensor is therefore inaccurate and prone to instabilities such as slipping and rotation. An array-type tactile sensor, by contrast, effectively reflects the contact between the robot end effector and the object during grabbing, feeds this back to the robot as a data matrix, and allows grab success to be judged through numerical analysis.
At present, grabbing discrimination methods fall into two categories: traditional machine learning (support vector machines and the like) and deep convolutional neural networks. However, most of them rely on single-point pressure sensor data and small data sets, so the discrimination accuracy is low; moreover, the complexity of deep convolutional neural networks makes training and discrimination time-consuming, and real-time discrimination within the control cycle of the robot grabbing operation is difficult. Meanwhile, different discrimination methods are built on different pressure sensors or tactile arrays, whose scales, measurement modes, ranges and precisions differ, so the resulting discrimination models lack universality. A real-time, accurate and efficient grabbing discrimination method that eliminates the physical differences between tactile arrays is therefore of great significance for autonomous robot grabbing.
BRIEF SUMMARY OF THE PRESENT DISCLOSURE
(I) Technical problem to be solved
Based on the above problems, the present disclosure provides a robot grabbing state discrimination method based on a tactile array, to alleviate the following technical problems of the prior art: low discrimination accuracy; long training and discrimination times caused by the complexity of deep convolutional neural networks; difficulty in performing real-time discrimination within the control cycle of the robot grabbing operation; and lack of universality of the resulting discrimination models.
(II) Technical scheme
The present disclosure provides a robot grabbing state discrimination method based on a tactile array, comprising:
step S1: constructing a tactile data set of a robot grabbing object;
step S2: normalizing the data set constructed in the step S1;
step S3: constructing a training model based on a multilayer perceptron;
step S4: initializing training model parameters;
step S5: performing model training and convergence based on the data set processed in the step S2 to obtain optimal parameters of the model; and
step S6: and performing actual grabbing operation to finish judging the grabbing state of the robot based on the tactile array.
In the disclosed embodiment, a tactile sensing array is mounted on a robot arm gripper of a robot.
In the disclosed embodiment, the tactile sensing array comprises 16 × 10 sensing units, each sensing unit being capable of sensing a pressure distribution exerted on an object when the robot grips the object.
In the disclosed embodiment, the force sensing range of each sensing unit is 0-5N, and the force resolution is 0.1N.
In the disclosed embodiment, in step S1, the types of objects to be grabbed include apple, baseball, water bottle, cup, pot, bottle opener, orange and tennis ball; the case in which an object is grabbed successfully serves as a positive sample, and the case in which the grab is unsuccessful or insecure serves as a negative sample. Each group of samples comprises the gray-scale image of the 16 × 10 sensing units and the corresponding grabbing outcome, and can be represented as [x_1, x_2, ..., x_i, ..., x_160, y] as input to the model, where x_i denotes the gray-image pixel value of the i-th sensing unit, y = 1 for a successful grab and y = 0 for an unsuccessful or insecure grab.
In the embodiment of the present disclosure, in step S2, in order to improve the accuracy and convergence rate of the model, the data of the model are normalized as shown in formula (1):

x_i* = (x_i − μ) / σ  (1)

where x_i is the raw sample data, x_i* is the normalized data, μ is the mean of the raw sample data, and σ is the standard deviation of the raw sample data.
In the embodiment of the present disclosure, in step S3, the multilayer perceptron model has 4 layers, comprising an input layer, 2 hidden layers and an output layer; the number of neurons in each layer can be represented by the vector n = (n_1, n_2, n_3, n_4)^T, where the input layer has n_1 = 160 neurons, the first hidden layer n_2 = 100 neurons, the second hidden layer n_3 = 30 neurons, and the output layer n_4 = 1 neuron.
In the embodiment of the present disclosure, in step S3, the multilayer perceptron training model is a multilayer fully-connected neural network, and the hidden layers and the output layer are connected to the neurons of the previous layer through linear weights, biases and an activation function. Define w_ij^l as the weight from the j-th neuron of layer l−1 to the i-th neuron of layer l, b_i^l as the bias of the i-th neuron of layer l, and a_i^l as the output of the i-th neuron of layer l. The transfer relationship between the layers can then be expressed as:

a_i^l = σ( Σ_j w_ij^l · a_j^{l−1} + b_i^l )

where σ(·) denotes the activation function, of the form σ(x) = max(0, x); the output layer outputs a value between 0 and 1 through the sigmoid function f(x) = 1/(1 + e^{−x}) to represent the probability of a successful robot grab. The closer the output is to 1, the higher the probability of grab success predicted by the multilayer perceptron model.
In the embodiment of the present disclosure, in step S4, to avoid the gradient vanishing and gradient explosion problems that may arise during model training, weights and biases are initialized for each layer of the model: the weights are randomly generated from a normal distribution with mean 0 and standard deviation sqrt(2/n_{l−1}), i.e. w_ij^l ~ N(0, 2/n_{l−1}), and the biases b_i^l are set to 0.
In the embodiment of the present disclosure, in step S5, the loss function of the model is set to logloss, expressed as:

J = −(1/N) Σ_{i=1}^{N} Σ_{j=1}^{M} y_{i,j} · log(p_{i,j})

where N denotes the number of samples, M denotes the number of classes of the classification problem, y_{i,j} is 1 if the i-th sample belongs to class j and 0 otherwise, and p_{i,j} denotes the probability that the i-th sample is predicted as class j.
The loss is computed to obtain the deviation between the predicted and actual values; the gradients of the loss function with respect to the weights and biases are then obtained by back-propagation, and the weights and biases are updated until the loss function converges. L2 regularization is applied to the model during training, and the loss function then becomes:

J_reg = J + (λ/2N) Σ_{l=2}^{L} ‖w^l‖₂²

where λ is the L2 regularization parameter, L is the number of model layers, and w^l is the weight matrix of the l-th layer.
(III) Advantageous effects
According to the technical scheme, the robot grabbing state judging method based on the tactile array has at least one or part of the following beneficial effects:
(1) the method can be applied to robot grabbing operations on a variety of objects; it judges whether a grab is successful according to the grabbing force distribution, adjusts the robot's grabbing force in real time, and achieves high-accuracy robot grabbing with a success rate above 99%;
(2) the training model is simple, the required data set is small, and the training result is highly accurate; gradient vanishing, gradient explosion and overfitting during model training can be effectively avoided;
(3) through gray level imaging processing of the acquired data, physical differences of different touch array sensors can be effectively eliminated, so that the model can be applied to data sets acquired by different touch sensors and has certain universality.
Drawings
Fig. 1 is a schematic flowchart of a method for judging a gripping state of a robot based on a haptic array according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of an architecture of a robot grasping state determining method based on a haptic array according to an embodiment of the present disclosure.
FIG. 3 is a schematic view of a 16 × 10 haptic array of an embodiment of the disclosure.
Fig. 4 is a diagram of a multi-layered perceptron model for grab determination according to an embodiment of the disclosure.
Detailed Description
The present disclosure provides a robot grabbing state discrimination method based on a tactile array. To achieve stable robot grabbing, a tactile sensing array is mounted on the robot gripper; tactile force information during grabbing is acquired through the array, and a data set covering different grabbing states of various objects is established. The robot grabbing discrimination problem is treated as a binary classification problem with tactile information as input, and a training model based on a multilayer perceptron is built. Model parameters are obtained through training to convergence, thereby achieving high-precision grabbing discrimination for different objects. The model trains quickly and discriminates accurately, outperforming other machine learning and deep learning models.
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
In an embodiment of the present disclosure, a method for determining a robot grabbing state based on a haptic array is provided, which is shown in fig. 1 to 4, and includes:
step S1: constructing a tactile data set of a robot grabbing object;
step S2: normalizing the data set constructed in the step S1;
step S3: constructing a training model based on a multilayer perceptron;
step S4: initializing training model parameters;
step S5: performing model training and convergence based on the data set processed in the step S2 to obtain optimal parameters of the model; and
step S6: and performing actual grabbing operation to finish judging the grabbing state of the robot based on the tactile array.
In step S1, a tactile sensing array is mounted on the gripper fingers of the robot arm, and the tactile information of the robot during grabbing is acquired through the array; a grabbing discrimination algorithm then determines whether the object has been grabbed successfully. The robot grabbing discrimination problem is treated as a binary classification problem with the tactile information acquired by the sensing array as input, the two classes being grab success and grab failure. A grab is defined as successful if the object, within 10 s of completing the grab, does not slip off, does not spin, and remains stable.
The tactile sensing array comprises 16 × 10 sensing units, arranged as shown in Fig. 3. Each sensing unit senses the pressure applied to the object when the robot grabs it; the force sensing range of each unit is 0–5 N, with a force resolution of 0.1 N. When the robot grabs an object, the pressure distribution is output as an array, read left to right and top to bottom.
Eight object types are grabbed: apple, baseball, water jug, cup, jar, bottle opener, tangerine and tennis ball.
Each group of samples comprises the gray-scale image of the 16 × 10 sensing units and the corresponding grabbing outcome, and can be expressed as [x_1, x_2, ..., x_i, ..., x_160, y] as input to the model. For each grabbing state, 20 consecutive frames of data are taken (sampling period 20 ms), and positive and negative samples are recorded for each object in 15 grabbing states. After pooling the data of the different objects and de-duplicating, 1562 samples are obtained in total: 1085 training set samples and 477 test set samples.
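The sample construction above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the helper name, the mapping of 0–5 N readings to 0–255 gray levels, and the row-major readout order are assumptions consistent with the description.

```python
import numpy as np

def make_sample(pressure_frame, grab_success):
    """Flatten one 16x10 tactile frame into a sample [x_1, ..., x_160, y].

    pressure_frame: 16x10 array of forces in newtons (0-5 N, 0.1 N resolution).
    grab_success:   True if the object stayed stable for 10 s after the grab.
    """
    frame = np.asarray(pressure_frame, dtype=float)
    assert frame.shape == (16, 10)
    # Map 0-5 N force readings to gray-scale pixel values in [0, 255],
    # abstracting away the physical range of the particular sensor.
    gray = np.round(frame / 5.0 * 255.0)
    x = gray.ravel()                 # row-major: left-to-right, top-to-bottom
    y = 1 if grab_success else 0     # positive vs negative sample
    return np.concatenate([x, [y]])

sample = make_sample(np.full((16, 10), 5.0), True)
# sample has 161 entries: 160 gray-pixel values followed by the label y
```

Stacking many such vectors (20 frames per grab, 15 grab states per object) would yield the 1562-row data set described above.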
In step S2, to improve the accuracy and convergence speed of the model, the data are normalized using the z-score method: as shown in formula (1), the mean is subtracted from the raw data and the result is divided by the standard deviation to obtain the normalized data.

x_i* = (x_i − μ) / σ  (1)

where x_i is the raw sample data, x_i* is the normalized data, μ is the mean of the raw sample data, and σ is the standard deviation of the raw sample data.
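The z-score normalization of step S2 can be sketched in a few lines. One assumption here: the disclosure does not state whether μ and σ are computed per sensing unit or over the whole data set, so a single global statistic is used.

```python
import numpy as np

def z_score_normalize(X):
    """Normalize per formula (1): subtract the mean, divide by the std."""
    mu = X.mean()
    sigma = X.std()
    return (X - mu) / sigma

X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
Xn = z_score_normalize(X)
# Xn has zero mean and unit standard deviation
```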
In step S3, the multilayer perceptron model proposed by the present disclosure has 4 layers: an input layer, 2 hidden layers, and an output layer. The model structure is shown in Fig. 4. The number of neurons in each layer can be represented by the vector n = (n_1, n_2, n_3, n_4)^T, where the input layer has n_1 = 160 neurons, the first hidden layer n_2 = 100 neurons, the second hidden layer n_3 = 30 neurons, and the output layer n_4 = 1 neuron.
The layers of the model are fully connected; the hidden layers and the output layer are connected to the neurons of the previous layer through linear weights, biases and an activation function. Define w_ij^l as the weight from the j-th neuron of layer l−1 to the i-th neuron of layer l, b_i^l as the bias of the i-th neuron of layer l, and a_i^l as the output of the i-th neuron of layer l. The transfer relationship between the layers can then be expressed as:

a_i^l = σ( Σ_j w_ij^l · a_j^{l−1} + b_i^l )

where σ(·) denotes the activation function; this model uses the ReLU function σ(x) = max(0, x) as the activation connecting the layers. Since robot grabbing discrimination is treated as a binary classification problem, the output layer has only one neuron and uses the sigmoid function f(x) = 1/(1 + e^{−x}) as its activation, outputting a value between 0 and 1 that characterizes the probability of a successful grab. The closer the output is to 1, the higher the probability of grab success predicted by the multilayer perceptron model.
In step S4, to avoid the gradient vanishing and gradient explosion problems that may arise during model training, the weights and biases of each layer are initialized using He initialization: the weights are randomly generated from a normal distribution with mean 0 and standard deviation sqrt(2/n_{l−1}), i.e. w_ij^l ~ N(0, 2/n_{l−1}), and the biases b_i^l are set to 0.
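He initialization as described in step S4 amounts to drawing each weight from N(0, 2/n_{l−1}) and zeroing the biases; a sketch (function name and parameter layout are assumptions):

```python
import numpy as np

def he_init(sizes, rng=None):
    """He initialization: w ~ N(0, 2/n_{l-1}), b = 0, per the disclosure."""
    if rng is None:
        rng = np.random.default_rng()
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_out, n_in))
        b = np.zeros(n_out)            # biases start at 0
        params.append((W, b))
    return params

params = he_init([160, 100, 30, 1], np.random.default_rng(42))
# first weight matrix: shape (100, 160), sample std close to sqrt(2/160)
```

Scaling the variance by the fan-in n_{l−1} keeps the pre-activation variance roughly constant across ReLU layers, which is what suppresses vanishing and exploding gradients.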
In step S5, the loss function of the model is set to logloss, expressed as:

J = −(1/N) Σ_{i=1}^{N} Σ_{j=1}^{M} y_{i,j} · log(p_{i,j})

where N denotes the number of samples, M denotes the number of classes of the classification problem, y_{i,j} is 1 if the i-th sample belongs to class j and 0 otherwise, and p_{i,j} denotes the probability that the i-th sample is predicted as class j.
After model parameter initialization is completed, forward propagation is performed from the input and the initialized weights and biases to compute the output of each layer, and the sigmoid function at the output layer yields the predicted value ŷ. The loss is computed to obtain the deviation between the predicted and actual values; back-propagation then yields the gradients of the loss with respect to the weights and biases, which are updated by the Adam algorithm until the loss function converges. During training, the samples are shuffled at each iteration, and an L2 regularization term is added to the loss function so that the weights remain as small as possible when the optimized model converges, thereby avoiding overfitting.
The loss function at this time is expressed as:

J_reg = J + (λ/2N) Σ_{l=2}^{L} ‖w^l‖₂²

where λ is the L2 regularization parameter, L is the number of model layers (here L = 4), and w^l is the weight matrix of the l-th layer. The L2 regularization parameter is set to λ = 0.0001, the batch size of samples fed to the optimizer to 200, the maximum number of iteration steps to 500, and the optimization tolerance to 1e-4. In the Adam algorithm, the learning rate is set to α = 0.001, the exponential decay rate of the first-moment estimate to β_1 = 0.9, the exponential decay rate of the second-moment estimate to β_2 = 0.999, and the numerical stability constant to ε = 1e-8. After training, the optimal parameters of the model are obtained; the corresponding optimal model reaches a prediction accuracy of 99.74% on the test set. High-accuracy robot grabbing operation can then be realized, with a grabbing success rate exceeding 99%.
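The training configuration above maps closely onto scikit-learn's MLPClassifier; this is an assumption — the disclosure does not name an implementation framework — and the tactile data below are a synthetic stand-in, not the patent's 1562-sample data set.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the tactile data set (160 gray-pixel features per
# sample); real data would come from the 16x10 sensing array.
rng = np.random.default_rng(0)
X = rng.random((400, 160))
y = (X.mean(axis=1) > 0.5).astype(int)   # toy, linearly separable labels

clf = MLPClassifier(
    hidden_layer_sizes=(100, 30),   # n_2 = 100, n_3 = 30 hidden neurons
    activation="relu",              # sigma(x) = max(0, x)
    solver="adam",                  # beta_1 = 0.9, beta_2 = 0.999, eps = 1e-8
    alpha=1e-4,                     # L2 regularization parameter lambda
    batch_size=200,
    learning_rate_init=0.001,       # Adam learning rate alpha
    max_iter=500,
    tol=1e-4,                       # optimization tolerance
    shuffle=True,                   # samples shuffled each iteration
    random_state=0,
)
clf.fit(X, y)
acc = clf.score(X, y)
```

scikit-learn's MLPClassifier uses log-loss and sigmoid output for binary problems, matching the model described here; its Adam defaults already coincide with the β_1, β_2 and ε values given in the text.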
So far, the embodiments of the present disclosure have been described in detail with reference to the accompanying drawings. It is to be noted that, in the attached drawings or in the description, the implementation modes not shown or described are all the modes known by the ordinary skilled person in the field of technology, and are not described in detail. Further, the above definitions of the various elements and methods are not limited to the various specific structures, shapes or arrangements of parts mentioned in the examples, which may be easily modified or substituted by those of ordinary skill in the art.
From the above description, those skilled in the art should clearly recognize the robot grabbing state discrimination method based on a tactile array of the present disclosure.

In summary, the present disclosure provides a robot grabbing state discrimination method based on a tactile array: tactile data acquired by a sensing array mounted on the robot gripper are normalized and fed to a multilayer perceptron classifier, which judges in real time whether a grab is successful, so that the grabbing force can be adjusted accordingly; gray-level imaging of the acquired data eliminates the physical differences between different tactile array sensors, giving the model a degree of universality across sensors.
It should also be noted that directional terms, such as "upper", "lower", "front", "rear", "left", "right", and the like, used in the embodiments are only directions referring to the drawings, and are not intended to limit the scope of the present disclosure. Throughout the drawings, like elements are represented by like or similar reference numerals. Conventional structures or constructions will be omitted when they may obscure the understanding of the present disclosure.
And the shapes and sizes of the respective components in the drawings do not reflect actual sizes and proportions, but merely illustrate the contents of the embodiments of the present disclosure. Furthermore, in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim.
Unless otherwise indicated, the numerical parameters set forth in the specification and attached claims are approximations that can vary depending upon the desired properties sought to be obtained by the present disclosure. In particular, all numbers expressing quantities of ingredients, reaction conditions, and so forth used in the specification and claims are to be understood as being modified in all instances by the term "about". Generally, the expression is meant to encompass variations of ± 10% in some embodiments, 5% in some embodiments, 1% in some embodiments, 0.5% in some embodiments by the specified amount.
Furthermore, the word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements.
The use of ordinal numbers such as "first," "second," "third," etc., in the specification and claims to modify a corresponding element does not by itself connote any ordinal number of the element or any ordering of one element from another or the order of manufacture, and the use of the ordinal numbers is only used to distinguish one element having a certain name from another element having a same name.
In addition, unless steps are specifically described or must occur in sequence, the order of the steps is not limited to that listed above and may be changed or rearranged as desired by the desired design. The embodiments described above may be mixed and matched with each other or with other embodiments based on design and reliability considerations, i.e., technical features in different embodiments may be freely combined to form further embodiments.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Also in the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the disclosure, various features of the disclosure are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various disclosed aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that is, the claimed disclosure requires more features than are expressly recited in each claim. Rather, as the following claims reflect, disclosed aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this disclosure.
The above-mentioned embodiments are intended to illustrate the objects, aspects and advantages of the present disclosure in further detail, and it should be understood that the above-mentioned embodiments are only illustrative of the present disclosure and are not intended to limit the present disclosure, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.
Claims (10)
1. A robot grabbing state distinguishing method based on a tactile array comprises the following steps:
step S1: constructing a tactile data set of a robot grabbing object;
step S2: normalizing the data set constructed in the step S1;
step S3: constructing a training model based on a multilayer perceptron;
step S4: initializing training model parameters;
step S5: performing model training and convergence based on the data set processed in the step S2 to obtain optimal parameters of the model; and
step S6: and performing actual grabbing operation to finish judging the grabbing state of the robot based on the tactile array.
2. The method for judging the grabbing state of the robot based on the tactile array according to claim 1, wherein the tactile sensing array is mounted on the gripper of the robot's mechanical arm.
3. The robot grabbing state discrimination method based on the tactile sensing array as claimed in claim 2, wherein the tactile sensing array comprises 16 × 10 sensing units, and each sensing unit can sense the pressure distribution applied to the object when the robot grabs the object.
4. The method for judging the grabbing state of the robot based on the tactile array as claimed in claim 3, wherein the force sensing range of each sensing unit is 0-5N, and the force resolution is 0.1N.
5. The method for judging the grabbing state of the robot based on the tactile array as claimed in claim 1, wherein in step S1, the grabbed object types include apple, baseball, water bottle, cup, pot, bottle opener, orange and tennis ball; the case in which an object is grabbed successfully serves as a positive sample, and the case in which the grab is unsuccessful or insecure serves as a negative sample; each group of samples comprises the gray-scale image of the 16 × 10 sensing units and the corresponding grabbing outcome, and can be represented as [x_1, x_2, ..., x_i, ..., x_160, y] as input to the model, where x_i denotes the gray-image pixel value of the i-th sensing unit, y = 1 for a successful grab and y = 0 for an unsuccessful or insecure grab.
6. The method for judging the grasping state of the robot based on the haptic array according to claim 1, wherein in step S2, in order to improve the accuracy and convergence rate of the model, the data of the model are normalized as shown in formula (1):

x_i* = (x_i − μ) / σ  (1)

where x_i is the raw sample data, x_i* is the normalized data, μ is the mean of the raw sample data, and σ is the standard deviation of the raw sample data.
7. The method for discriminating the grabbing state of a robot based on a tactile array according to claim 1, wherein in step S3, the multilayer perceptron model has 4 layers, comprising an input layer, 2 hidden layers and an output layer; the number of neurons in each layer can be represented by the vector n = (n_1, n_2, n_3, n_4)^T, where the input layer has n_1 = 160 neurons, the first hidden layer n_2 = 100 neurons, the second hidden layer n_3 = 30 neurons, and the output layer n_4 = 1 neuron.
8. The method for judging the grabbing state of the robot based on the tactile array according to claim 1, wherein in step S3, the multi-layer perceptron training model is a multi-layer fully-connected neural network, in which the neurons of the hidden layers and the output layer are connected to the neurons of the previous layer through linear weights, biases and an activation function; w_ij^l denotes the weight from the j-th neuron of layer l-1 to the i-th neuron of layer l; b_i^l denotes the bias of the i-th neuron of layer l; a_i^l denotes the output of the i-th neuron of layer l; the transfer relationship between layers can then be expressed as:

a_i^l = σ( Σ_j w_ij^l · a_j^(l-1) + b_i^l )

wherein σ(·) denotes the activation function, whose functional form is σ(x) = max(0, x); the output layer outputs a value between 0 and 1 through the function f(x) = 1 / (1 + e^(-x)) to represent the probability of the robot grabbing successfully; the closer the output is to 1, the higher the probability of a successful grab predicted by the multi-layer perceptron model.
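A sketch of the layer-to-layer transfer in claim 8, using the stated activations (σ(x) = max(0, x) for the hidden layers, a 0-to-1 sigmoid output) and the 160-100-30-1 layer sizes from claim 7; the weight values here are random placeholders, not trained parameters:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)            # sigma(x) = max(0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))      # f(x) = 1 / (1 + e^-x)

def forward(x, params):
    """Forward pass of the 4-layer perceptron (160-100-30-1)."""
    W1, b1, W2, b2, W3, b3 = params
    a = relu(W1 @ x + b1)                # hidden layer 1: 100 neurons
    a = relu(W2 @ a + b2)                # hidden layer 2: 30 neurons
    return sigmoid(W3 @ a + b3)          # output: grab-success probability

rng = np.random.default_rng(0)
params = (rng.normal(0, 0.1, (100, 160)), np.zeros(100),
          rng.normal(0, 0.1, (30, 100)), np.zeros(30),
          rng.normal(0, 0.1, (1, 30)), np.zeros(1))
prob = forward(rng.random(160), params)  # value strictly between 0 and 1
```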
9. The method for judging the grabbing state of a robot based on a tactile array as claimed in claim 1, wherein in step S4, in order to avoid the gradient vanishing and gradient explosion problems that may arise during training of the model, the weights and biases of each layer of the model are initialized: the weights w are randomly generated so as to obey a normal distribution with mean 0 and a prescribed standard deviation, i.e. w ~ N(0, σ²), and the biases b are set to 0.
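Claim 9's initialization (zero-mean normal weights, zero biases) can be sketched as below. The standard deviation used by the patent is not legible in this text; sqrt(2 / n_in) (He initialization, commonly paired with the ReLU activation of claim 8) is an assumption of this sketch, not the patent's stated value:

```python
import numpy as np

def init_layer(n_in, n_out, rng):
    """Zero-mean normal weights, zero biases for one fully-connected layer.

    std = sqrt(2 / n_in) is an ASSUMED choice (He initialization);
    the patent's own standard deviation is garbled in the source text.
    """
    W = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_out, n_in))
    b = np.zeros(n_out)   # biases set to 0, per claim 9
    return W, b

rng = np.random.default_rng(42)
W1, b1 = init_layer(160, 100, rng)   # input layer -> first hidden layer
```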
10. The grabbing-state discrimination method of a robot based on a tactile array according to claim 1, wherein in step S5, the loss function of the model is set to logloss, which is expressed as:

logloss = -(1/N) · Σ_{i=1..N} Σ_{j=1..M} y_ij · log(p_ij)

wherein N denotes the number of samples and M the number of classes of the classification problem; y_ij is 1 when the i-th sample belongs to class j and 0 otherwise; p_ij denotes the probability that the i-th sample is predicted to be of class j;

the loss is computed to obtain the deviation between the predicted and actual values, the gradients of the loss function with respect to the weights and biases are then obtained by back propagation, and the weights and biases are updated until the loss function converges; L2 regularization is applied to the model during training, the loss function then being expressed as:

J = logloss + (λ / 2N) · Σ w²

wherein the sum runs over all weights of the model and λ is the regularization coefficient.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010252733.XA CN111459278A (en) | 2020-04-01 | 2020-04-01 | Robot grabbing state discrimination method based on touch array |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111459278A true CN111459278A (en) | 2020-07-28 |
Family
ID=71678992
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010252733.XA Pending CN111459278A (en) | 2020-04-01 | 2020-04-01 | Robot grabbing state discrimination method based on touch array |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111459278A (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011014810A1 (en) * | 2009-07-30 | 2011-02-03 | Northwestern University | Systems, methods, and apparatus for reconstruction of 3-d object morphology, position, orientation and texture using an array of tactile sensors |
CN103049792A (en) * | 2011-11-26 | 2013-04-17 | 微软公司 | Discriminative pretraining of Deep Neural Network |
CN105956351A (en) * | 2016-07-05 | 2016-09-21 | 上海航天控制技术研究所 | Touch information classified computing and modelling method based on machine learning |
CN110023965A (en) * | 2016-10-10 | 2019-07-16 | 渊慧科技有限公司 | For selecting the neural network of the movement executed by intelligent robot body |
CN106671112A (en) * | 2016-12-13 | 2017-05-17 | 清华大学 | Judging method of grabbing stability of mechanical arm based on touch sensation array information |
CN106960099A (en) * | 2017-03-28 | 2017-07-18 | 清华大学 | A kind of manipulator grasp stability recognition methods based on deep learning |
WO2018236753A1 (en) * | 2017-06-19 | 2018-12-27 | Google Llc | Robotic grasping prediction using neural networks and geometry aware object representation |
CN110691676A (en) * | 2017-06-19 | 2020-01-14 | 谷歌有限责任公司 | Robot crawling prediction using neural networks and geometrically-aware object representations |
US20200086483A1 (en) * | 2018-09-15 | 2020-03-19 | X Development Llc | Action prediction networks for robotic grasping |
CN110909644A (en) * | 2019-11-14 | 2020-03-24 | 南京理工大学 | Method and system for adjusting grabbing posture of mechanical arm end effector based on reinforcement learning |
Non-Patent Citations (2)
Title |
---|
Li Tiejun; Liu Yingxin; Liu Jinyue; Yang Dong: "Real-time perception of manipulation intention based on an array tactile sensor" * |
Duan Lian: "Design and algorithm research of a half-palm hand system based on FSR sensors" * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114065806A (en) * | 2021-10-28 | 2022-02-18 | 贵州大学 | Manipulator touch data classification method based on impulse neural network |
CN114065806B (en) * | 2021-10-28 | 2022-12-20 | 贵州大学 | Manipulator touch data classification method based on impulse neural network |
WO2024087331A1 (en) * | 2022-10-24 | 2024-05-02 | 深圳先进技术研究院 | Robotic grasping prediction method based on triplet contrastive network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111459278A (en) | Robot grabbing state discrimination method based on touch array | |
CN109657708B (en) | Workpiece recognition device and method based on image recognition-SVM learning model | |
US20040002931A1 (en) | Probability estimate for K-nearest neighbor | |
CN111144552B (en) | Multi-index grain quality prediction method and device | |
CN112001270A (en) | Ground radar automatic target classification and identification method based on one-dimensional convolutional neural network | |
CN113537305A (en) | Image classification method based on matching network less-sample learning | |
Zhang et al. | Hardness recognition of fruits and vegetables based on tactile array information of manipulator | |
CN111915246B (en) | Parallel detection method for grain storage quantity of granary | |
Liang et al. | Novel decoupling algorithm based on parallel voltage extreme learning machine (PV-ELM) for six-axis F/M sensors | |
Scimeca et al. | Soft morphological processing of tactile stimuli for autonomous category formation | |
CN111582395A (en) | Product quality classification system based on convolutional neural network | |
CN109308316A (en) | A kind of adaptive dialog generation system based on Subject Clustering | |
Setiawan et al. | Transfer learning with multiple pre-trained network for fundus classification | |
Mohamed et al. | Optimized feed forward neural network for microscopic white blood cell images classification | |
CN101285816A (en) | Copper matte air refining procedure parameter soft sensing instrument and its soft sensing method | |
Zhu et al. | Visual-tactile sensing for real-time liquid volume estimation in grasping | |
Yun et al. | Grasping detection of dual manipulators based on Markov decision process with neural network | |
Liu et al. | Comparison of different CNN models in tuberculosis detecting | |
Jaiswal et al. | Machine Learning-Based Classification Models for Diagnosis of Diabetes | |
CN114120406B (en) | Face feature extraction and classification method based on convolutional neural network | |
CN114707399A (en) | Decoupling method of six-dimensional force sensor | |
Ghosh et al. | Combining neural network models for blood cell classification | |
Wang et al. | Cnn based chromosome classification architecture for combined dataset | |
Li et al. | Robot grasping stability prediction network based on feature-fusion and feature-reconstruction of tactile information | |
Alaba | Image classification using different machine learning techniques |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||