CN114330460B - Object attribute identification method based on smart hand touch sense


Info

Publication number
CN114330460B
CN114330460B (application CN202210030176.6A)
Authority
CN
China
Prior art keywords: data, touch, value, dimensional, layer
Prior art date
Legal status
Active
Application number
CN202210030176.6A
Other languages
Chinese (zh)
Other versions
CN114330460A (en)
Inventor
于国奇
张鹏
邹文凯
单东日
王晓芳
Current Assignee
Qilu University of Technology
Original Assignee
Qilu University of Technology
Priority date
Filing date
Publication date
Application filed by Qilu University of Technology
Priority to CN202210030176.6A
Publication of CN114330460A
Application granted
Publication of CN114330460B



Abstract

The invention relates to the technical field of intelligent robot recognition, and in particular to an object attribute recognition method based on the touch sense of a dexterous hand, comprising the following steps: S1, collecting the raw vibration and pressure signals generated by interaction between the tactile sensors on the dexterous hand and an object, and recording, classifying and labeling the objects according to the signal differences fed back by different objects; S2, processing the raw data, extracting the effective signal segments to make data samples, and pairing them with labels to build the data-label tactile data set STD; S3, training the MPCNN on the STD from S2; S4, identifying object attributes with the trained MPCNN. The beneficial effects of the invention are as follows: the object tactile attribute data set meets the object attribute data requirements in the field of robot touch, and, based on the STD, a multi-dimensional feature parallel convolutional neural network model is provided that extracts features from object tactile data and identifies object attributes.

Description

Object attribute identification method based on smart hand touch sense
Technical Field
The invention relates to the technical field of intelligent robot recognition, and in particular to an object attribute recognition method based on the touch sense of a dexterous hand.
Background
With the rapid development of intelligent robot technology and the development and application of deep learning theory, and in view of the uniqueness of tactile information, many researchers have begun to address fields such as tactile perception and attribute recognition with deep-learning-based methods. Objects of different materials and characteristics give different feedback to a tactile sensor during interaction, so a touch-based robot can perceive differences between objects, which is of great significance for advancing robot intelligence. Against this background of increasingly intelligent systems, recognizing the tactile properties of objects using deep learning theory is therefore a problem worth studying.
Therefore, the application designs an object attribute identification method based on the touch sense of a dexterous hand.
Disclosure of Invention
The invention provides an object attribute identification method based on the touch sense of a dexterous hand, aiming to remedy the deficiencies of the prior art.
The invention is realized by the following technical scheme:
An object attribute identification method based on the touch sense of a dexterous hand, characterized by comprising the following steps:
S1, collecting the raw vibration and pressure signals generated by the interaction between the tactile sensors on the dexterous hand and an object, and recording, classifying and labeling the objects according to the signal differences fed back by different objects;
S2, processing the raw data, extracting the effective signal segments to make data samples, and pairing them with labels to build the data-label tactile data set STD;
S3, training the multi-dimensional feature parallel convolutional neural network model MPCNN on the data-label tactile data set from S2;
and S4, identifying the attributes of the object using the multi-dimensional feature parallel convolutional neural network model MPCNN.
Further, in order to better realize the invention, the multi-dimensional feature parallel convolutional neural network model MPCNN is divided into four modules: three feature extraction modules Block1, Block2 and Block3, and a classification module (Fully connected). Each Block module splices, in parallel, the input original feature data, the convolved low-dimensional feature data and the convolved high-dimensional feature data to obtain multi-level high-dimensional feature data; the resulting high-dimensional feature data are then reduced in dimension to extract the salient features, and finally an activation function is applied to improve the nonlinear expression capacity of the Block module. After the three Block modules, the classification module flattens the high-dimensional features into one-dimensional data, gradually reduces their dimension to that of the sample label space, and compares the result with the true labels.
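A minimal PyTorch sketch of this structure is given below. The kernel sizes, the per-branch channel splits, the hidden width of the first fully connected layer (512) and the use of LeakyReLU between the two fully connected layers are illustrative assumptions; the block output channel totals (8, 24, 57) and the flattened size 4218 follow the dimension flow described later in the specification.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """One MPCNN feature-extraction block (illustrative sketch).

    Splices the raw input with a low-dimensional and a high-dimensional
    convolution branch along the channel axis, reduces the result with
    2x2 max pooling, and applies the activation.
    """
    def __init__(self, in_ch: int, low_ch: int, high_ch: int):
        super().__init__()
        self.low = nn.Conv2d(in_ch, low_ch, kernel_size=3, padding=1)    # convolved low-dimensional features
        self.high = nn.Conv2d(in_ch, high_ch, kernel_size=5, padding=2)  # convolved high-dimensional features
        self.pool = nn.MaxPool2d(2)   # dimension reduction ("parameter 2")
        self.act = nn.LeakyReLU()     # activation named in the network-structure description

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # parallel splice of original, low-dimensional and high-dimensional features (channel axis)
        fused = torch.cat([x, self.low(x), self.high(x)], dim=1)
        return self.act(self.pool(fused))

class MPCNN(nn.Module):
    """Three Blocks followed by the Fully connected classifier (sketch)."""
    def __init__(self, num_outputs: int = 20):
        super().__init__()
        self.features = nn.Sequential(
            Block(2, 3, 3),      # (B, 2, 23, 300)  -> (B, 8, 11, 150)
            Block(8, 8, 8),      # (B, 8, 11, 150)  -> (B, 24, 5, 75)
            Block(24, 16, 17),   # (B, 24, 5, 75)   -> (B, 57, 2, 37)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                 # 57 * 2 * 37 = 4218
            nn.Linear(4218, 512),         # first fully connected layer (hidden width assumed)
            nn.LeakyReLU(),
            nn.Linear(512, num_outputs),  # second fully connected layer: 2 attributes x 10 grades
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))
```

The concatenation in `forward` is the parallel splice along the channel dimension (dim=1) described above.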
Further, in order to better implement the present invention, S3 specifically comprises:
S31, before training the multi-dimensional feature parallel convolutional neural network model MPCNN, initializing the parameters of each neural network layer in each module of the MPCNN with a Gaussian distribution with mean 0 and variance 1 (a minimal sketch of this initialization follows these steps);
S32, inputting the data set STD to train the neuron weight parameters W, the bias parameters b and the convolution layer coefficients k until the difference between the predicted and true values is minimized, so that the MPCNN attains its best capability of identifying the tactile features of the object and thereby recognizes the object attributes.
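A sketch of the initialization described in S31, assuming PyTorch; `init_gaussian` is a name introduced here for illustration:

```python
import torch.nn as nn

def init_gaussian(module: nn.Module) -> None:
    """Initialise weights and biases from N(0, 1), as described in S31 (variance 1 -> std 1)."""
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.normal_(module.weight, mean=0.0, std=1.0)
        if module.bias is not None:
            nn.init.normal_(module.bias, mean=0.0, std=1.0)

# usage: model.apply(init_gaussian) before training on the STD data set
```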
Further, in order to better realize the invention, the dexterous hand is a Kinova robotic-arm dexterous hand, and the sensor on the dexterous hand is a SynTouch tactile sensor.
Further, in order to better implement the present invention, the specific method for making the data-label tactile data set STD in S2 is as follows:
S21, publishing the tactile data generated by the interaction of the tactile sensors with the object through the SynTouch Node; during the execution of one complete exploration action that generates tactile data, the two tactile sensors follow the two-finger grip to physically interact with the object, continuously acquiring tactile sensor data and publishing them to the ROS network at a frequency of 100 Hz;
S22, continuously judging whether the tactile sensor is in contact with the target object, i.e. whether the ratio of the vibration signal Value(t+1) at the next moment to the vibration signal Value(t) at the previous moment is greater than 1.1 or less than 0.9 (a code sketch of this test follows this list);
S23, subscribing to the tactile data published by the SynTouch Node; once contact occurs, the data published by the tactile sensors are accumulated into a two-channel tactile sample;
S24, the tactile data samples together with their labels form the STD tactile data set.
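The contact test of S22 can be written as a small predicate; this is a sketch under the assumption that consecutive vibration readings are available as plain floating-point values:

```python
def in_contact(value_t: float, value_t1: float) -> bool:
    """S22 contact test (sketch): contact is assumed once the ratio of the vibration
    signal at time t+1 to that at time t leaves the [0.9, 1.1] band."""
    if value_t == 0:
        return False          # guard against division by zero; no valid reading yet
    ratio = value_t1 / value_t
    return ratio > 1.1 or ratio < 0.9
```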
The beneficial effects of the invention are as follows:
the invention proposes an object haptic attribute data set (Syntouch Tactile Datasets, STD) that meets the object attribute data requirements in the field of robot haptic, taking into account the absence of object attribute data sets in the field of haptic data.
Based on STD, the invention provides a Multi-dimensional characteristic parallel convolutional neural network model (Multi-dimensional ParallelConvolutional Neural Network, MPCNN) which can realize the extraction of object touch data characteristics and the identification of object attributes.
Drawings
FIG. 1 is an overall flow chart of the present invention;
FIG. 2 is a flow chart of the collection of a robotic haptic object data set STD of the present invention;
fig. 3 is a diagram of an MPCNN network structure model according to the present invention;
FIG. 4 is a graph of the loss decrease during training of the present invention;
FIG. 5 is a graph of the elasticity accuracy increase during training of the present invention;
FIG. 6 is a graph of the hardness accuracy increase during training of the present invention;
FIG. 7 is a graph of the loss decrease during testing of the present invention;
FIG. 8 is a graph of the elasticity accuracy increase during testing of the present invention;
FIG. 9 is a graph of the hardness accuracy increase during testing of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present invention, it should be noted that, directions or positional relationships indicated by terms such as "middle", "upper", "lower", "horizontal", "inner", "outer", etc., are directions or positional relationships based on those shown in the drawings, or those that are conventionally put in place when the inventive product is used, are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific direction, be configured and operated in a specific direction, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like, are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
Furthermore, the terms "horizontal," "vertical," and the like do not require that a component be absolutely horizontal or suspended; a slight inclination is allowed. "Horizontal" merely means that the direction is more nearly horizontal than "vertical"; it does not mean that the structure must be perfectly horizontal, and it may be slightly inclined.
In the description of the present invention, it should also be noted that, unless explicitly stated and limited otherwise, the terms "disposed," "connected," and "coupled" are to be construed broadly: a connection may be fixed, detachable, or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or an internal communication between two elements. The specific meaning of the above terms in the present invention will be understood by those of ordinary skill in the art on a case-by-case basis.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Figs. 1 to 9 illustrate a specific embodiment of the present invention, as follows:
1. Introduction of the hardware equipment:
This example uses a Kinova robotic-arm dexterous hand, a SynTouch tactile sensor, a server running Ubuntu, and the Robot Operating System (ROS).
2. Object tactile data set (SynTouch Tactile Dataset, STD)
Since the Kinova robotic arm is an advanced open-source manipulator whose low-level control code can be modified through ROS, the SynTouch tactile sensor is brought into contact with objects of various materials by operating the Kinova arm, and object tactile attribute data are collected. The selected contact objects are: mineral water bottles, trash bags, glue, bubble wrap, tooth-mug boxes, plastic boxes, metal posts, pop cans, tennis balls, rubber, earphone boxes, soap boxes, hand cream, glass bottles, foam boards, foam balls, double-layer foam, black sponge, square sponge, round sponge, jelly, ham sausages, oranges, medium packs of tissue, triangular bandages, white cles, masks, black bandages, towels, books, paper cups, and small packs of tissue. Based on the differences in elasticity and hardness among the objects, an elasticity grade label and a hardness grade label are created for each object, and the objects together with these labels form the data-label tactile data set STD. Fig. 2 shows the process of manipulating the robot for tactile data collection.
As shown in fig. 2, a specific collection procedure is as follows:
1) ROS Master: the core of the whole ROS operation, which registers the names of nodes, services and topics and maintains the parameter server.
2) Kinova Node: the Kinova robotic arm is maneuvered and a reasonable path is planned to move the end of the arm to a fixed point. The gripper is then operated to grasp the object: the jaws are first opened, then closed slowly; after the object is gripped, the jaws continue to close, and once the fixed force threshold is reached, they stop closing.
3) SynTouch Node: publishes the tactile data generated by the interaction of the tactile sensors with the object; during the execution of one complete exploration action that generates tactile data, the two tactile sensors follow the two-finger grip to physically interact with the object, continuously acquiring tactile sensor data and publishing them to the ROS network at a frequency of 100 Hz.
4) Judge: continuously judge whether the tactile sensor is in contact with the target object, i.e. whether the ratio of the vibration signal Value(t+1) at the next moment to the vibration signal Value(t) at the previous moment is greater than 1.1 or less than 0.9.
5) Subscribe to the tactile data published by the SynTouch Node; once contact occurs, the data published by the tactile sensors are accumulated into a two-channel tactile sample. Because the Kinova arm must reach a preset position before performing the exploration action, the tactile sensor returns useless data during that approach. To solve this, two threads are used: the main thread continuously receives tactile data and judges whether the sensor has made initial contact with the object; once initial contact is made, a detached thread is started, which first waits 3 seconds until all the sample data have been received in the main thread, and the object data from the moment of contact up to that point are then extracted to make a tactile sample (see the sketch after this list).
6) The tactile data samples together with their labels form the STD tactile data set.
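A sketch of steps 4)-6) as a ROS (rospy) subscriber with the detached sampling thread is shown below. The topic name, message type and the index of the vibration channel are assumptions made for illustration; the actual SynTouch driver interface is not specified here.

```python
import threading

import rospy
from std_msgs.msg import Float64MultiArray  # assumed message type for the tactile stream

tactile_buffer = []    # two-channel tactile readings appended by the subscriber callback
contact_index = None   # index of the reading at which initial contact was detected

def finish_sample():
    """Detached thread of step 5): wait 3 s for the remaining data, then cut out the sample."""
    rospy.sleep(3.0)
    sample = tactile_buffer[contact_index:]   # data from initial contact up to this moment
    # ... pair `sample` with the object's elasticity/hardness grade labels and store it in STD

def on_tactile(msg):
    """Main-thread callback: buffer the data and watch for initial contact (step 4)."""
    global contact_index
    tactile_buffer.append(msg.data)
    if contact_index is None and len(tactile_buffer) >= 2:
        prev, curr = tactile_buffer[-2][0], tactile_buffer[-1][0]   # vibration channel, index 0 assumed
        if prev != 0 and (curr / prev > 1.1 or curr / prev < 0.9):
            contact_index = len(tactile_buffer) - 1
            threading.Thread(target=finish_sample, daemon=True).start()

rospy.init_node("std_collector")
rospy.Subscriber("/syntouch/tactile", Float64MultiArray, on_tactile)  # topic name assumed
rospy.spin()
```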
3. The network structure of the multi-dimensional feature parallel convolutional neural network model (Multi-dimensional Parallel Convolutional Neural Network, MPCNN) is shown in Fig. 3, where:
Convolution: a convolution layer, which performs a convolution operation on the input data and maps it to a hidden-layer feature space; the specific parameters (filter, stride, padding) are shown in the figure.
Activation: an activation function, which maps the data features from a linear to a nonlinear relation and increases the nonlinear expression capacity of the model; LeakyReLU is used in this model.
Fully connected: two fully connected layers that map the obtained feature representation to the sample label space.
Loss function (Loss): the MultiLabelSoftMarginLoss function is used as the criterion for the difference between the predicted and true values; its formula is given in (4)-(5) below.
In the fusion of the hidden-layer feature spaces, the concatenation operator ⊕ is used to splice different feature layers along the second dimension of the high-dimensional implicit features, i.e. the channel dimension.
The dimensional change and specific operation process of the model are as follows:
First, the dimensions of the data mean (number of samples, number of channels, number of features, feature length), with the value (1600, 2, 23, 300). After feature extraction and processing by the Block1 module the dimension is (64, 8, 11, 150); after the Block2 module it is (64, 24, 5, 75); and after the Block3 module it is (64, 57, 2, 37). The features are then flattened from the second dimension onward to (64, 4218) and passed through the fully connected layers down to (64, 20), the same dimension as the label space. Here 64 means that 64 samples are randomly selected for each batch when the mini-batch stochastic gradient descent method is used, and 20 means that each sample outputs 2 attributes, elasticity and hardness, each with 10 grades.
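The 20-dimensional label space (2 attributes × 10 grades) and the quoted shape flow can be expressed as follows; the ordering of the elasticity and hardness blocks within the 20-dimensional vector is an assumption for illustration:

```python
import torch

def make_target(elasticity_grade: int, hardness_grade: int) -> torch.Tensor:
    """Build the 20-dimensional multi-label target for one sample (grades 1..10)."""
    target = torch.zeros(20)
    target[elasticity_grade - 1] = 1.0        # positions 0-9: elasticity grade (ordering assumed)
    target[10 + hardness_grade - 1] = 1.0     # positions 10-19: hardness grade
    return target

# Shape flow quoted above (batch size 64):
# (64, 2, 23, 300) -> Block1 -> (64, 8, 11, 150) -> Block2 -> (64, 24, 5, 75)
#                  -> Block3 -> (64, 57, 2, 37) -> flatten -> (64, 4218) -> FC -> (64, 20)
```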
Calculation of the specific feature values:
During forward propagation, the specific calculation formulas are as follows:
(1)-(2) X_{i+1} = σ( Maxpooling( X_i ⊕ k_1·(W_i^1·X_i + b_i^1) ⊕ k_2·(W_i^2·X_i + b_i^2) ) )
In (1)-(2), X_{i+1} is the feature tensor value of the hidden space output by the (i+1)-th Block module, σ is the sigmoid activation function, Maxpooling is the max-pooling dimension-reduction operation with parameter 2, k_1 is the weight coefficient of the convolved low-dimensional feature layer, W_i^1 and b_i^1 are the neuron weight and bias parameter values contained in the convolved low-dimensional feature layer of the i-th Block module, k_2 is the weight coefficient of the convolved high-dimensional feature layer, and W_i^2 and b_i^2 are the neuron weight and bias parameter values contained in the convolved high-dimensional feature layer of the i-th Block module. The operator ⊕ splices different feature layers along the second dimension of the high-dimensional implicit features, i.e. the channel dimension.
(3) Y_{j+1} = W_{j+1}·Y_j + b_{j+1}, where Y_0 is X_3 in (1)-(2) (flattened).
In (3), Y_{j+1} is the output feature tensor value of the (j+1)-th fully connected layer in the Fully connected module, W_{j+1} and b_{j+1} are respectively the weight and bias parameters of the neurons of the (j+1)-th fully connected layer, and Y_2 is the final output value of the model.
(4)-(5) Loss(x, y) = -(1/N) · Σ_n ( y_n · ln(σ(x_n)) + (1 - y_n) · ln(1 - σ(x_n)) )
In (4)-(5), y_n is the true label value of the n-th sample, x_n is the network prediction, i.e. Y_2 in (3), σ is the sigmoid activation function, and ln is the logarithmic function.
During back-propagation, the partial derivatives of the loss function with respect to W, b and k, i.e. ∂Loss/∂W, ∂Loss/∂b and ∂Loss/∂k, are obtained by the gradient descent method according to the chain rule, and the neural network parameters W, b and k are continuously optimized with the Adam optimizer at a learning rate of 0.001 to obtain the best fitting capability.
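A minimal training-step sketch matching the stated criterion (MultiLabelSoftMarginLoss) and optimizer (Adam, learning rate 0.001) might look like this, assuming an MPCNN model and a DataLoader over the STD samples already exist:

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 150, device: str = "cpu") -> None:
    """Sketch of the training loop: MultiLabelSoftMarginLoss between the 20-dim
    prediction and the true labels, optimised with Adam at learning rate 0.001."""
    criterion = nn.MultiLabelSoftMarginLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    model.to(device).train()
    for _ in range(epochs):
        for x, y in loader:                  # x: (64, 2, 23, 300), y: (64, 20)
            optimizer.zero_grad()
            loss = criterion(model(x.to(device)), y.to(device))
            loss.backward()                  # back-propagates dLoss/dW, dLoss/db, dLoss/dk
            optimizer.step()
```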
4. Results
We obtained the loss-decrease curves and the accuracy-increase curves for the elasticity and hardness of all objects during model training and testing; see Figs. 4-9.
As can be seen from Fig. 4, the difference between the predicted and true values drops sharply over the 150 training rounds; the rate of loss decrease becomes smaller after about 40 rounds, and the loss then falls slowly to about 0.03 after about 80 rounds and remains there.
As can be seen from Fig. 5, in identifying the elasticity of the objects, the accuracy of the model is low for roughly the first 40 rounds because of the large difference between the predicted and true values; an inflection point in the recognition accuracy is reached at around 40 rounds, after which the accuracy rises at a higher rate until it stabilizes after about 80 rounds and is maintained above 90%.
As can be seen from Fig. 6, in identifying the hardness of the objects, the accuracy of the model is low for roughly the first 40 rounds because of the large difference between the predicted and true values; an inflection point in the recognition accuracy is reached at around 40 rounds, after which the accuracy rises at a higher rate, stabilizes after about 80 rounds, and is maintained above 90%.
As can be seen from Fig. 7, the loss between the predicted and true values drops sharply over the 150 test rounds; the rate of decrease becomes small after about 20 rounds, and the loss then falls slowly after about 40 rounds to about 0.04 and remains stable.
As can be seen from Fig. 8, in identifying the elasticity of the objects, the accuracy of the model is low for roughly the first 35 rounds because of the large difference between the predicted and true values; an inflection point in the recognition accuracy is reached at around 35 rounds, after which the accuracy rises at a higher rate until it stabilizes after about 90 rounds and is maintained above 90%.
As can be seen from Fig. 9, in identifying the hardness of the objects, the accuracy of the model is low for roughly the first 45 rounds because of the large difference between the predicted and true values; an inflection point in the recognition accuracy is reached at around 45 rounds, after which the accuracy rises at a higher rate until it stabilizes after about 80 rounds, reaching 100%.
In summary, the data set STD provided in this embodiment includes abundant tactile features, and can be applied to the proposed neural network model MPCNN to effectively identify the elasticity and hardness in the tactile properties of the object.
Finally, it is noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention, and that other modifications and equivalents thereof by those skilled in the art should be included in the scope of the claims of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (4)

1. An object attribute identification method based on smart hand touch is characterized by comprising the following steps:
s1, collecting original vibration signals and pressure signals generated by interaction of a touch sensor and an object on a smart hand, and recording, classifying and labeling the object according to signal differences fed back by different objects;
s2, processing the original data, collecting effective signal sections to manufacture data samples, and manufacturing a data-label touch data set STD corresponding to the label;
s3, training a multi-dimensional feature parallel convolutional neural network model MPCNN based on the data-label tactile data set in S2;
S4, identifying the attributes of the object using the multi-dimensional feature parallel convolutional neural network model MPCNN; the multi-dimensional feature parallel convolutional neural network model MPCNN is divided into four modules, including three feature extraction modules Block1, Block2, Block3 and a classification module Fully connected; each Block module of the feature extraction modules splices, in parallel, the input original feature data, the convolved low-dimensional feature data and the convolved high-dimensional feature data to obtain multi-level high-dimensional feature data, then reduces the dimension of the obtained high-dimensional feature data to extract the salient feature data, and finally uses an activation function to improve the nonlinear expression capacity of the Block module; after the three Block modules, the classification module Fully connected flattens the high-dimensional data features into one-dimensional data, gradually reduces the dimension of the one-dimensional data to the same dimension as the sample label space, and compares it with the true labels;
the feature extraction modules extract 2 characteristics, elasticity and hardness, and the specific feature values are calculated as follows:
a-b. X_{i+1} = σ( Maxpooling( X_i ⊕ k_1·(W_i^1·X_i + b_i^1) ⊕ k_2·(W_i^2·X_i + b_i^2) ) )
wherein X_{i+1} is the feature tensor value of the hidden space output by the (i+1)-th Block module, σ is the sigmoid activation function, Maxpooling is the max-pooling dimension-reduction operation with parameter 2, k_1 is the weight coefficient of the convolved low-dimensional feature layer, W_i^1 and b_i^1 are the neuron weight and bias parameter values contained in the convolved low-dimensional feature layer of the i-th Block module, k_2 is the weight coefficient of the convolved high-dimensional feature layer, W_i^2 and b_i^2 are the neuron weight and bias parameter values contained in the convolved high-dimensional feature layer of the i-th Block module, and ⊕ splices different feature layers along the second dimension of the high-dimensional implicit features, i.e. the channel dimension;
c. Y_{j+1} = W_{j+1}·Y_j + b_{j+1}, wherein Y_0 is X_3 in b;
wherein Y_{j+1} is the output feature tensor value of the (j+1)-th fully connected layer in the Fully connected module, W_{j+1} and b_{j+1} are respectively the weight and bias parameters of the neurons of the (j+1)-th fully connected layer, and Y_2 is the final output value of the model;
d-e. Loss(x, y) = -(1/N) · Σ_n ( y_n · ln(σ(x_n)) + (1 - y_n) · ln(1 - σ(x_n)) )
wherein y_n is the true label value of the n-th sample, x_n is the network prediction, i.e. Y_2 in c, σ is the sigmoid activation function, and ln is the logarithmic function;
f. during back-propagation, the partial derivatives of the loss function with respect to W, b and k, i.e. ∂Loss/∂W, ∂Loss/∂b and ∂Loss/∂k, are obtained by the gradient descent method according to the chain rule, and the neural network parameters W, b and k are continuously optimized with the Adam optimizer at a learning rate of 0.001 to obtain the best fitting capability.
2. The smart hand tactile object attribute identification method of claim 1 wherein:
the step S3 is specifically that,
s31, before training a multi-dimensional feature parallel convolutional neural network model MPCNN, initializing each layer of neural network parameter in each module of the multi-dimensional feature parallel convolutional neural network model MPCNN by using Gaussian distribution with a mean value of 0 and a variance of 1;
s32, inputting a data set STD to train a weight parameter W and a bias parameter b of a neuron in the neural network and a convolution layer coefficient k until the difference between a predicted value and a true value is minimized, so that the MPCNN has the optimal capability of identifying the tactile data characteristics of the object, and further the object attribute is identified.
3. The smart hand tactile object attribute identification method of claim 1 wherein:
the dexterous hand is a Kinova mechanical arm dexterous hand, and the sensor on the dexterous hand is a Synthouch touch sensor.
4. The smart hand tactile object attribute identification method of claim 1 wherein:
the specific method for making the data-tag tactile data set STD in S2 is as follows:
s21, releasing the touch data generated by the interaction of the touch sensors and the object through the SyntuchNode, wherein in the execution process of a complete exploration action for generating the touch data, the two touch sensors follow the double-finger grip to physically interact with the object, continuously acquiring the touch sensor data and releasing the touch sensor data to an ROS network, wherein the release frequency is 100Hz;
s22, continuously judging whether the touch sensor is in contact with the target object, namely the vibration signal Value at the later moment t+1 And the Value of the vibration signal at the previous moment t The ratio is greater than 1.1 or less than 0.9;
s23, subscribing the touch data issued by the Syntuch node, and once contact is generated, superposing the data generated by the touch sensor issued by the Syntuch node on a double-channel touch sample;
s24, the tactile data sample and the label together form an STD tactile data set.
CN202210030176.6A 2022-01-12 2022-01-12 Object attribute identification method based on smart hand touch sense Active CN114330460B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210030176.6A CN114330460B (en) 2022-01-12 2022-01-12 Object attribute identification method based on smart hand touch sense


Publications (2)

Publication Number  Publication Date
CN114330460A  2022-04-12
CN114330460B  2023-05-30





Legal Events

Code  Title
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant