CN114898219A - SVM-based manipulator touch data representation and identification method - Google Patents

SVM-based manipulator touch data representation and identification method

Info

Publication number
CN114898219A
CN114898219A (application CN202210817681.5A)
Authority
CN
China
Prior art keywords
layer
data
attention
svm
class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210817681.5A
Other languages
Chinese (zh)
Other versions
CN114898219B (en
Inventor
冯蕾
杨景娜
禄雨薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China National Institute of Standardization
Original Assignee
China National Institute of Standardization
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China National Institute of Standardization filed Critical China National Institute of Standardization
Priority to CN202210817681.5A priority Critical patent/CN114898219B/en
Publication of CN114898219A publication Critical patent/CN114898219A/en
Application granted granted Critical
Publication of CN114898219B publication Critical patent/CN114898219B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/08 - Learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements using pattern recognition or machine learning
    • G06V 10/764 - Arrangements using classification, e.g. of video objects
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V 10/774 - Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 - Arrangements using neural networks
    • G06V 20/00 - Scenes; scene-specific elements
    • G06V 20/10 - Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides an SVM (support vector machine)-based manipulator tactile data representation and identification method, which comprises a tactile data input layer, a tactile data coding layer, a tactile data output end and a tactile data support vector network; the tactile data coding layer comprises a multi-head coding self-attention layer and a feedforward neural network layer. The invention belongs to the field of computers and has the following advantages: identifying objects from pressure (tactile) data effectively relieves traditional object classification of its heavy reliance on computer-vision methods; whereas the traditional approach of solving the tactile classification problem with a BP neural network needs a large number of training samples and, when the dimensionality of the tactile data is high, the model becomes complex and computationally expensive, the present method completes accurate prediction of different pattern classes with a small number of samples; and, because the traditional BP-neural-network approach is prone to overfitting, the support vector machine is used to find the optimal decision boundary, effectively avoiding overfitting.

Description

SVM-based manipulator touch data representation and identification method
Technical Field
The invention relates to the field of computers, in particular to a manipulator touch data representation and identification method based on an SVM (support vector machine).
Background
A Support Vector Machine (SVM) can handle linear binary- and multi-classification tasks, nonlinear binary- and multi-classification tasks, regression of ordinary continuous variables, regression of probabilistic continuous variables, support vector clustering, outlier detection, and related problems; it is widely applied to pattern recognition tasks such as handwritten digit recognition, face recognition, text and hypertext classification, image recognition and image segmentation, and SVMs are also widely used for protein classification.
The development of computer vision and artificial intelligence allows cameras and computers to replace human eyes in tasks such as identifying, tracking and measuring targets; however, under weak light, overexposure, narrow spaces and limited computing power, visual classification of objects is unsatisfactory. Touch is one of the irreplaceable sources of information for humans exploring the surrounding environment: it helps humans perceive the environment by transmitting various sensory information (e.g., smoothness, pressure, temperature, vibration) to the central nervous system. In a conventional workspace, the types of objects a robot needs to grasp are limited, so the tactile data generated by the robot's sensors are limited, which makes proper pattern classification feasible.
Disclosure of Invention
Technical problem to be solved
In order to solve the problems in the prior art, the invention provides a manipulator touch data representation and identification method based on an SVM (support vector machine), which aims to solve the following problems:
(1) the problem that the traditional object classification depends on a computer vision method seriously is solved;
(2) in the traditional computer vision scheme, after the object is correctly identified, if the object needs to be further operated, additional positioning and mechanical arm planning are required;
(3) the traditional method for solving the haptic classification problem by using the BP neural network needs a large number of sample supports in the training stage, and when the dimensionality of haptic data is high, the model is complicated and the calculation amount is increased;
(4) traditional solutions to the haptic classification problem using BP neural networks tend to produce overfitting.
(II) technical scheme
Aiming at the technical problem to be solved by the invention, the invention provides a manipulator tactile data representation and identification method based on an SVM (support vector machine), which comprises a tactile data input layer, a tactile data coding layer, a tactile data output end and a tactile data support vector network; the tactile data coding layer comprises a multi-head coding self-attention layer and a feedforward neural network layer;
(1) the tactile data input layer normalizes the pressure data obtained from the pressure sensors mounted on the manipulator to obtain tactile data TS of dimension 4 × 24; position coding is then embedded into TS to obtain tactile data TS2. Position coding is needed because TS is processed as a single batched input, which ignores the temporal relation among the data, so position coding is applied to TS to describe the sequential positional relation among its components;
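The input-layer step above can be sketched in a few lines; the per-column min-max scaling, the simulated sensor values, and the function name `normalize_pressure` are illustrative assumptions rather than the patent's exact procedure:

```python
import numpy as np

def normalize_pressure(raw: np.ndarray) -> np.ndarray:
    """Scale raw pressure readings into [0, 1] per sensor column (assumed scheme)."""
    lo = raw.min(axis=0, keepdims=True)
    hi = raw.max(axis=0, keepdims=True)
    return (raw - lo) / np.maximum(hi - lo, 1e-8)

# Simulated pressure-sensor frames: 4 time steps x 24 sensor readings
rng = np.random.default_rng(0)
raw = rng.uniform(0.0, 50.0, size=(4, 24))
TS = normalize_pressure(raw)
print(TS.shape)  # (4, 24)
```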
(2) the multi-head coding self-attention layer calculates the correlation among the tactile data TS2 of different time steps input into it; the specific operation steps are as follows:
S1, generate feature matrices W_q, W_k and W_v whose components each take values between -1 and 1; the feature matrices W_q, W_k and W_v are set as non-modifiable, and their dimensions are all 24 × 8;
S2, generate the query matrix Q, the key matrix K and the value matrix V from the feature matrices W_q, W_k and W_v;
S3, calculate the attention score Attention-Score; the specific calculation formula is:
Attention-Score = softmax( Q·K^T / √(d_k) ) · V
In the above formula, d_k is the scaling factor (the column dimension of K); the dimension of the attention score Attention-Score is 4 × 8;
S4, introduce a multi-head mechanism to calculate the multi-head attention score Multi-Self-Attention: repeat S1, S2 and S3 to generate 3 attention scores, and concatenate the three attention scores column-wise to obtain Multi-Self-Attention, whose dimension is 4 × 24;
S5, perform the residual addition operation to obtain the output Out of the multi-head coding self-attention layer; the specific calculation formula is:
Out = Layer_Norm( TS2 + Multi-Self-Attention )
In the above formula, Out denotes the output of the multi-head coding self-attention layer and Layer_Norm denotes layer normalization; the dimension of Out is 4 × 24, the same as that of the tactile data TS2, and its four rows are denoted C1, C2, C3 and C4, each of dimension 1 × 24.
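The attention steps S1 to S5 can be sketched as follows; the random initialisation of W_q, W_k and W_v and the helper names are assumptions, while the shapes (24 × 8 projections, three heads, a 4 × 24 output with residual addition and layer normalization) follow the text:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-6):
    mu = x.mean(axis=-1, keepdims=True)
    sd = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sd + eps)

def multi_head_self_attention(TS2, n_heads=3, d_head=8, seed=0):
    """S1-S5: fixed random 24x8 projections per head, scaled dot-product
    attention, column-wise concatenation, residual addition + LayerNorm."""
    rng = np.random.default_rng(seed)
    heads = []
    for _ in range(n_heads):
        Wq, Wk, Wv = (rng.uniform(-1, 1, size=(TS2.shape[1], d_head))
                      for _ in range(3))           # S1: non-trainable matrices
        Q, K, V = TS2 @ Wq, TS2 @ Wk, TS2 @ Wv     # S2: each 4 x 8
        heads.append(softmax(Q @ K.T / np.sqrt(d_head)) @ V)  # S3: 4 x 8
    multi = np.concatenate(heads, axis=1)          # S4: 4 x 24
    return layer_norm(TS2 + multi)                 # S5: Out, 4 x 24

TS2 = np.random.default_rng(1).uniform(0, 1, size=(4, 24))
Out = multi_head_self_attention(TS2)
print(Out.shape)  # (4, 24)
```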
(3) The feedforward neural network layer comprises 4 BP neural networks; each BP neural network contains a first intermediate hidden layer and a second intermediate hidden layer, each of which comprises 24 neurons. The inputs of the BP neural networks are C1, C2, C3 and C4 respectively, and the calculation step is as follows:
Input C1, C2, C3 and C4 into the corresponding BP neural networks to compute F_1, F_2, F_3 and F_4:
F_i = f( W_2 · f( W_1 · C_i + b_1 ) + b_2 )
In the above formula, f denotes the activation function, b_1 denotes the bias of the first intermediate hidden layer, b_2 denotes the bias of the second intermediate hidden layer, W_1 is the inner-star weight vector of the first intermediate hidden layer, W_2 is the inner-star weight vector of the second intermediate hidden layer, and F_i is the output of each BP neural network, namely F_1, F_2, F_3 and F_4.
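A minimal sketch of one BP network of the feedforward layer follows; the sigmoid activation and the randomly chosen weights are assumptions made for illustration (the patent does not name the activation function):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_forward(c, W1, b1, W2, b2):
    """One BP network: two intermediate hidden layers of 24 neurons each."""
    h1 = sigmoid(W1 @ c + b1)      # first intermediate hidden layer
    return sigmoid(W2 @ h1 + b2)   # second intermediate hidden layer -> F_i

rng = np.random.default_rng(2)
C = rng.normal(size=(4, 24))       # rows C1..C4 from the attention layer
params = [(rng.uniform(-1, 1, (24, 24)), rng.uniform(-1, 1, 24),
           rng.uniform(-1, 1, (24, 24)), rng.uniform(-1, 1, 24))
          for _ in range(4)]       # one parameter set per BP network
F = [bp_forward(C[i], *params[i]) for i in range(4)]  # F1..F4, each length 24
print(len(F), F[0].shape)  # 4 (24,)
```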
(4) The haptic data support vector network is a powerful machine-learning method used to perform the final classification of the output data of the haptic data coding layer; the specific calculation steps are as follows:
S1, suppose the training set contains M pattern classes; using the one-against-one method of the SVM, construct a binary_SVM between every pair of classes, so that M(M-1)/2 binary_SVMs are constructed in total. For the i-th and j-th pattern classes, the binary_SVM is constructed as follows:
min over w^{ij}, b^{ij}, ξ^{ij}:  (1/2)·‖w^{ij}‖² + C·Σ_t ξ_t^{ij}
s.t.  (w^{ij})^T·ψ(x_t) + b^{ij} ≥ 1 − ξ_t^{ij},  if x_t belongs to class i;
      (w^{ij})^T·ψ(x_t) + b^{ij} ≤ −1 + ξ_t^{ij},  if x_t belongs to class j;
      ξ_t^{ij} ≥ 0
In the above formula, superscripts i and j denote parameters between class i and class j, subscript t indexes the samples of classes i and j, and ψ denotes the nonlinear mapping from the input space to the feature space; solving for the parameters is equivalent to solving the dual of the above problem, after which the decision function used to decide between class i and class j is:
f^{ij}(x_new) = sign( (w^{ij})^T·ψ(x_new) + b^{ij} )
In the above formula, x_new is the tactile data TS2 to be classified, and the sign of f^{ij}(x_new) decides whether x_new belongs to class i or class j;
S2, classify the new tactile data TS2 by a voting strategy: each binary_SVM casts one predictive vote for the new data according to its decision function. Taking the binary_SVM between class i and class j as an example, if it predicts x_new as class i, the vote count of class i is incremented by 1; the class with the highest number of votes is the final prediction final_wins for the new tactile data TS2.
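The one-against-one construction and voting of S1 and S2 can be sketched with scikit-learn; the RBF kernel, the default penalty, and the toy three-class data are assumptions made only for illustration:

```python
from itertools import combinations

import numpy as np
from sklearn.svm import SVC

def ovo_vote_predict(X_train, y_train, x_new, M):
    """One-against-one: train one binary SVM per class pair
    (M*(M-1)/2 in total), let each cast a vote via its decision,
    and return the class with the most votes (final_wins)."""
    votes = np.zeros(M, dtype=int)
    for i, j in combinations(range(M), 2):
        mask = np.isin(y_train, (i, j))
        clf = SVC(kernel="rbf").fit(X_train[mask], y_train[mask])
        votes[int(clf.predict(x_new[None, :])[0])] += 1
    return int(np.argmax(votes))

# Toy data: three well-separated pattern classes in 2-D
rng = np.random.default_rng(3)
centers = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
X = np.vstack([c + 0.3 * rng.normal(size=(20, 2)) for c in centers])
y = np.repeat(np.arange(3), 20)
pred = ovo_vote_predict(X, y, np.array([4.8, 5.1]), M=3)
```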
(III) advantageous effects
(1) Identifying objects from pressure (tactile) data effectively relieves traditional object classification of its heavy reliance on computer-vision methods;
(2) in a traditional computer-vision scheme, once an object has been correctly identified, further manipulation still requires additional localization and mechanical-arm planning, whereas the scheme based on tactile data realizes "touch and get";
(3) the traditional approach of solving the tactile classification problem with a BP neural network needs a large number of training samples, and when the dimensionality of the tactile data is very high the model becomes complex and the computational cost grows; the present method completes accurate prediction of different pattern classes with a small number of samples;
(4) because the traditional BP-neural-network approach to tactile classification is prone to overfitting, the Support Vector Machine (SVM) is used to find the optimal decision boundary, effectively avoiding overfitting.
Drawings
FIG. 1 is a flow chart of a SVM-based manipulator haptic data representation recognition method in accordance with the present invention;
FIG. 2 is a data processing flow diagram of a haptic data encoding layer proposed by the present invention;
fig. 3 is a schematic diagram of the installation of the pressure sensor of the manipulator corresponding to the SVM-based manipulator tactile data representation and identification method.
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments; all other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present disclosure without any creative effort belong to the protection scope of the present disclosure.
A manipulator touch data representation and identification method based on SVM comprises a touch data input layer, a touch data coding layer, a touch data output end and a touch data support vector network; the tactile data coding layer comprises a multi-head coding self-attention layer and a feedforward neural network layer;
(1) the tactile data input layer normalizes the pressure data obtained from the pressure sensors mounted on the manipulator to obtain tactile data TS of dimension 4 × 24; position coding is then embedded into TS to obtain tactile data TS2. Position coding is needed because TS is processed as a single batched input, which ignores the temporal relation among the data, so position coding is applied to TS to describe the sequential positional relation among its components. The specific calculation formula of the position coding is:
p_(i,2k) = sin( i / 10000^(2k/d) ),  p_(i,2k+1) = cos( i / 10000^(2k/d) )
In the above formula, d is the dimension of the tactile data TS2 and p_i denotes the position-coding component at position i.
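Assuming the position coding takes the standard sinusoidal form consistent with the variables described above (the patent's image of the formula is not reproduced in the text), it can be sketched as:

```python
import numpy as np

def positional_encoding(n_pos, d):
    """Sinusoidal position coding: sin on even components, cos on odd ones."""
    pe = np.zeros((n_pos, d))
    pos = np.arange(n_pos, dtype=float)[:, None]   # positions i
    k = np.arange(0, d, 2, dtype=float)[None, :]   # even component indices 2k
    angle = pos / np.power(10000.0, k / d)
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe

PE = positional_encoding(4, 24)  # matches the 4 x 24 tactile data TS
# TS2 would then be obtained as TS + PE
print(PE.shape)  # (4, 24)
```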
(2) The multi-head coding self-attention layer calculates the correlation among the tactile data TS2 of different time steps input into it; the specific operation steps are as follows:
S1, generate feature matrices W_q, W_k and W_v whose components each take values between -1 and 1; the feature matrices W_q, W_k and W_v are set as non-modifiable, and their dimensions are all 24 × 8;
S2, generate the query matrix Q, the key matrix K and the value matrix V from the feature matrices W_q, W_k and W_v; the specific calculation formula is:
Q = TS2·W_q,  K = TS2·W_k,  V = TS2·W_v
The dimensions of the query matrix Q, the key matrix K and the value matrix V calculated by the above formula are all 4 × 8;
S3, calculate the attention score Attention-Score; the specific calculation formula is:
Attention-Score = softmax( Q·K^T / √(d_k) ) · V
In the above formula, d_k is the scaling factor (the column dimension of K); the dimension of the attention score Attention-Score is 4 × 8;
S4, introduce a multi-head mechanism to calculate the multi-head attention score Multi-Self-Attention: repeat S1, S2 and S3 to generate 3 attention scores, and concatenate the three attention scores column-wise to obtain Multi-Self-Attention, whose dimension is 4 × 24;
S5, perform the residual addition operation to obtain the output Out of the multi-head coding self-attention layer; the specific calculation formula is:
Out = Layer_Norm( TS2 + Multi-Self-Attention )
In the above formula, Out denotes the output of the multi-head coding self-attention layer and Layer_Norm denotes layer normalization; the dimension of Out is 4 × 24, the same as that of the tactile data TS2, and its four rows are denoted C1, C2, C3 and C4, each of dimension 1 × 24.
(3) The feedforward neural network layer comprises 4 BP neural networks; each BP neural network contains a first intermediate hidden layer and a second intermediate hidden layer, each of which comprises 24 neurons. The inputs of the BP neural networks are C1, C2, C3 and C4 respectively, and the calculation step is as follows:
Input C1, C2, C3 and C4 into the corresponding BP neural networks to compute F_1, F_2, F_3 and F_4:
F_i = f( W_2 · f( W_1 · C_i + b_1 ) + b_2 )
In the above formula, f denotes the activation function, b_1 denotes the bias of the first intermediate hidden layer, b_2 denotes the bias of the second intermediate hidden layer, W_1 is the inner-star weight vector of the first intermediate hidden layer, W_2 is the inner-star weight vector of the second intermediate hidden layer, and F_i is the output of each BP neural network, namely F_1, F_2, F_3 and F_4.
(4) The haptic data support vector network is a powerful machine-learning method used to perform the final classification of the output data of the haptic data coding layer; the specific calculation steps are as follows:
S1, suppose the training set contains M pattern classes; using the one-against-one method of the SVM, construct a binary_SVM between every pair of classes, so that M(M-1)/2 binary_SVMs are constructed in total. For the i-th and j-th pattern classes, the binary_SVM is constructed as follows:
min over w^{ij}, b^{ij}, ξ^{ij}:  (1/2)·‖w^{ij}‖² + C·Σ_t ξ_t^{ij}
s.t.  (w^{ij})^T·ψ(x_t) + b^{ij} ≥ 1 − ξ_t^{ij},  if x_t belongs to class i;
      (w^{ij})^T·ψ(x_t) + b^{ij} ≤ −1 + ξ_t^{ij},  if x_t belongs to class j;
      ξ_t^{ij} ≥ 0
In the above formula, superscripts i and j denote parameters between class i and class j, subscript t indexes the samples of classes i and j, and ψ denotes the nonlinear mapping from the input space to the feature space; solving for the parameters is equivalent to solving the dual of the above problem, after which the decision function used to decide between class i and class j is:
f^{ij}(x_new) = sign( (w^{ij})^T·ψ(x_new) + b^{ij} )
In the above formula, x_new is the tactile data TS2 to be classified, and the sign of f^{ij}(x_new) decides whether x_new belongs to class i or class j;
S2, classify the new tactile data TS2 by a voting strategy: each binary_SVM casts one predictive vote for the new data according to its decision function. Taking the binary_SVM between class i and class j as an example, if it predicts x_new as class i, the vote count of class i is incremented by 1; the class with the highest number of votes is the final prediction final_wins for the new tactile data TS2.
Example one
S1, perform the normalization operation on the pressure data obtained from the pressure sensors mounted on the manipulator to obtain tactile data TS;
S2, position-code and embed the tactile data TS to obtain tactile data TS2;
S3, calculate the correlation among the tactile data TS2 of different time steps input into the multi-head coding self-attention layer to obtain its output Out, and input Out into the feedforward neural network layer to obtain the outputs F_1, F_2, F_3 and F_4 of the tactile data coding layer;
S4, input the outputs F_1, F_2, F_3 and F_4 of the tactile data coding layer into the tactile data support vector network and perform accumulated voting with the M(M-1)/2 constructed binary_SVMs; the pattern class with the most votes is the final prediction final_wins.
The specific working process of the invention is described above, and the steps are repeated when the device is used next time.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
The present invention and its embodiments have been described above, and the description is not intended to be limiting, and the drawings are only one embodiment of the present invention, and the actual structure is not limited thereto. In summary, those skilled in the art should appreciate that they can readily use the disclosed conception and specific embodiments as a basis for designing or modifying other structures for carrying out the same purposes of the present invention without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (4)

1. A manipulator touch data representation and recognition method based on SVM, characterized in that: the system comprises a tactile data input layer, a tactile data coding layer, a tactile data output end and a tactile data support vector network; the tactile data coding layer comprises a multi-head coding self-attention layer and a feedforward neural network layer; the tactile data TS is subjected to position-coding embedding to obtain tactile data TS2. Position coding is needed because the tactile data TS is processed as a single batched input, which ignores the temporal relation among the data, so position coding is applied to TS to describe the sequential positional relation among its components. The specific calculation formula of the position coding is:
p_(i,2k) = sin( i / 10000^(2k/d) ),  p_(i,2k+1) = cos( i / 10000^(2k/d) )
In the above formula, d is the dimension of the tactile data TS2 and p_i denotes the position-coding component at position i.
2. An SVM-based manipulator haptic data representation recognition method according to claim 1, wherein: the multi-head coding self-attention layer calculates the correlation among the tactile data TS2 of different time steps input into it; the specific operation steps are as follows:
S1, generate feature matrices W_q, W_k and W_v whose components each take values between -1 and 1; the feature matrices W_q, W_k and W_v are set as non-modifiable, and their dimensions are all 24 × 8;
S2, generate the query matrix Q, the key matrix K and the value matrix V from the feature matrices W_q, W_k and W_v; the specific calculation formula is:
Q = TS2·W_q,  K = TS2·W_k,  V = TS2·W_v
The dimensions of the query matrix Q, the key matrix K and the value matrix V calculated by the above formula are all 4 × 8;
S3, calculate the attention score Attention-Score; the specific calculation formula is:
Attention-Score = softmax( Q·K^T / √(d_k) ) · V
In the above formula, d_k is the scaling factor (the column dimension of K); the dimension of the attention score Attention-Score is 4 × 8;
S4, introduce a multi-head mechanism to calculate the multi-head attention score Multi-Self-Attention: repeat S1, S2 and S3 to generate 3 attention scores, and concatenate the three attention scores column-wise to obtain Multi-Self-Attention, whose dimension is 4 × 24;
S5, perform the residual addition operation to obtain the output Out of the multi-head coding self-attention layer; the specific calculation formula is:
Out = Layer_Norm( TS2 + Multi-Self-Attention )
In the above formula, Out denotes the output of the multi-head coding self-attention layer and Layer_Norm denotes layer normalization; the dimension of Out is 4 × 24, the same as that of the tactile data TS2, and its four rows are denoted C1, C2, C3 and C4, each of dimension 1 × 24.
3. An SVM-based manipulator haptic data representation recognition method according to claim 2, wherein: the feedforward neural network layer comprises 4 BP neural networks; each BP neural network contains a first intermediate hidden layer and a second intermediate hidden layer, each of which comprises 24 neurons. The inputs of the BP neural networks are C1, C2, C3 and C4 respectively, and the calculation step is as follows:
Input C1, C2, C3 and C4 into the corresponding BP neural networks to compute F_1, F_2, F_3 and F_4:
F_i = f( W_2 · f( W_1 · C_i + b_1 ) + b_2 )
In the above formula, f denotes the activation function, b_1 denotes the bias of the first intermediate hidden layer, b_2 denotes the bias of the second intermediate hidden layer, W_1 is the inner-star weight vector of the first intermediate hidden layer, W_2 is the inner-star weight vector of the second intermediate hidden layer, and F_i is the output of each BP neural network, namely F_1, F_2, F_3 and F_4.
4. An SVM-based manipulator haptic data representation recognition method according to claim 3, wherein: the haptic data support vector network is a powerful machine-learning method used to perform the final classification of the output data of the haptic data coding layer; the specific calculation steps are as follows:
S1, suppose the training set contains M pattern classes; using the one-against-one method of the SVM, construct a binary_SVM between every pair of classes, so that M(M-1)/2 binary_SVMs are constructed in total. For the i-th and j-th pattern classes, the binary_SVM is constructed as follows:
min over w^{ij}, b^{ij}, ξ^{ij}:  (1/2)·‖w^{ij}‖² + C·Σ_t ξ_t^{ij}
s.t.  (w^{ij})^T·ψ(x_t) + b^{ij} ≥ 1 − ξ_t^{ij},  if x_t belongs to class i;
      (w^{ij})^T·ψ(x_t) + b^{ij} ≤ −1 + ξ_t^{ij},  if x_t belongs to class j;
      ξ_t^{ij} ≥ 0
In the above formula, superscripts i and j denote parameters between class i and class j, subscript t indexes the samples of classes i and j, and ψ denotes the nonlinear mapping from the input space to the feature space; solving for the parameters is equivalent to solving the dual of the above problem, after which the decision function used to decide between class i and class j is:
f^{ij}(x_new) = sign( (w^{ij})^T·ψ(x_new) + b^{ij} )
In the above formula, x_new is the tactile data TS2 to be classified, and the sign of f^{ij}(x_new) decides whether x_new belongs to class i or class j;
S2, classify the new tactile data TS2 by a voting strategy: each binary_SVM casts one predictive vote for the new data according to its decision function. Taking the binary_SVM between class i and class j as an example, if it predicts x_new as class i, the vote count of class i is incremented by 1; the class with the highest number of votes is the final prediction final_wins for the new tactile data TS2.
CN202210817681.5A 2022-07-13 2022-07-13 SVM-based manipulator touch data representation and identification method Active CN114898219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210817681.5A CN114898219B (en) 2022-07-13 2022-07-13 SVM-based manipulator touch data representation and identification method


Publications (2)

Publication Number Publication Date
CN114898219A true CN114898219A (en) 2022-08-12
CN114898219B CN114898219B (en) 2022-11-08

Family

ID=82729307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210817681.5A Active CN114898219B (en) 2022-07-13 2022-07-13 SVM-based manipulator touch data representation and identification method

Country Status (1)

Country Link
CN (1) CN114898219B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330898A (en) * 2022-08-24 2022-11-11 晋城市大锐金马工程设计咨询有限公司 Improved Swin transform-based magazine, book and periodical advertisement embedding method
CN116150684A (en) * 2023-01-17 2023-05-23 中国科学院自动化研究所 Attention mechanism-based haptic attribute identification method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110141191A (en) * 2019-05-22 2019-08-20 中国标准化研究院 The tactile evaluating ability test reference and roughness perception test method of roughness
CN112801280A (en) * 2021-03-11 2021-05-14 东南大学 One-dimensional convolution position coding method of visual depth self-adaptive neural network
US20220080598A1 (en) * 2020-09-17 2022-03-17 Honda Motor Co., Ltd. Systems and methods for visuo-tactile object pose estimation
CN114332549A (en) * 2022-01-04 2022-04-12 中国科学院成都生物研究所 Deformable body identification method based on BP neural network unit
CN114462567A (en) * 2021-12-15 2022-05-10 西安邮电大学 Attention mechanism-based neural network model


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHEN ZHANG ET AL.: "Hardness recognition of fruits and vegetables based on tactile array information of manipulator", Computers and Electronics in Agriculture *
ZHOU Rong et al.: "Research on direction recognition of tactile perception based on neural networks", Journal of Wuhan University of Technology (Information & Management Engineering Edition) *
CUI Shaowei et al.: "Visual-tactile fusion based slip detection in robotic grasping", Journal of Huazhong University of Science and Technology (Natural Science Edition) *



Similar Documents

Publication Publication Date Title
Fu et al. Deep residual LSTM with domain-invariance for remaining useful life prediction across domains
CN114898219B (en) SVM-based manipulator touch data representation and identification method
Ahmad et al. Some solutions to the missing feature problem in vision
Dhurandhar et al. Tip: Typifying the interpretability of procedures
CN113344206A (en) Knowledge distillation method, device and equipment integrating channel and relation feature learning
CN110543566B (en) Intention classification method based on self-attention neighbor relation coding
Li et al. Nuclear norm regularized convolutional Max Pos@ Top machine
CN113705238B (en) Method and system for analyzing aspect level emotion based on BERT and aspect feature positioning model
Zhang et al. Rich feature combination for cost-based broad learning system
Zhang Application of artificial intelligence recognition technology in digital image processing
Rodzin et al. Deep learning techniques for natural language processing
Liu et al. Heterogeneous unsupervised domain adaptation based on fuzzy feature fusion
CN114743018A (en) Image description generation method, device, equipment and medium
Li et al. Haptic recognition using hierarchical extreme learning machine with local-receptive-field
Sharma et al. A framework for image captioning based on relation network and multilevel attention mechanism
Prashanth et al. Book detection using deep learning
Alnabih et al. Arabic Sign Language letters recognition using vision transformer
Juan et al. Utilization of artificial intelligence techniques for photovoltaic applications
Nouri Handwritten digit recognition by deep learning for automatic entering of academic transcripts
Vrábel et al. Artificial neural networks for classification
Chen et al. Optimize the Performance of the Neural Network by using a Mini Dataset Processing Method
Ilayarani et al. Dichotomic Prediction of an Event using Non Deterministic Finite Automata
Guh et al. Fast and accurate recognition of control chart patterns using a time delay neural network
Shrivastava Adma: A Flexible Loss Function for Neural Networks
Pandey et al. Handwritten Text Conversion By Using ANN Approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant