CN111507425A - Pattern recognition method and system based on air bag array tactile perception

Pattern recognition method and system based on air bag array tactile perception

Info

Publication number: CN111507425A
Authority: CN (China)
Prior art keywords: training, air bag, model, data, skin sensor
Prior art date: 2020-04-28
Legal status: Pending
Application number: CN202010350629.4A
Other languages: Chinese (zh)
Inventors: 皮阳军, 颜泽荣, 刘凡, 郭琦
Current Assignee: Chongqing University
Original Assignee: Chongqing University
Priority date: 2020-04-28
Filing date: 2020-04-28
Publication date: 2020-08-07
Application filed by Chongqing University
Priority to CN202010350629.4A
Publication of CN111507425A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01L MEASURING FORCE, STRESS, TORQUE, WORK, MECHANICAL POWER, MECHANICAL EFFICIENCY, OR FLUID PRESSURE
    • G01L25/00 Testing or calibrating of apparatus for measuring force, torque, work, mechanical power, or mechanical efficiency
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Force Measurement Appropriate To Specific Purposes (AREA)

Abstract

The invention discloses a pattern recognition method and system based on air bag array tactile perception. Tactile data of different objects are first acquired with the air bag array robot skin perception system; a pattern recognition algorithm model is then obtained by training on the tactile perception data; finally, the trained machine learning algorithm model is used to recognize the contact type of unknown tactile perception data.

Description

Pattern recognition method and system based on air bag array tactile perception
Technical Field
The invention relates to the technical field of robot skin sensing, and in particular to a pattern recognition method and system based on air bag array tactile perception.
Background
With the wide application of robot technology in industry and biomedical engineering, intelligent robots must carry out human-machine cooperative work in complex, changeable, multi-information environments. When a robot's vision is limited, touch is an important source of external information: the robot skin serves as the medium of interaction with the environment, and tactile sensing can provide effective environmental feedback for a robot performing multiple tasks. Traditional robot skin pattern recognition methods usually rely on object contact at a limited set of poses and positions for tactile information acquisition, and they train the recognition network with only a single machine learning algorithm, so recognition accuracy is low and problems such as overfitting arise easily.
Disclosure of Invention
In view of the above, the present invention provides a pattern recognition method and system based on air bag array tactile perception that ensure accuracy, reliability and applicability.
To achieve this purpose, the invention provides the following technical solutions:
The invention provides a pattern recognition method based on air bag array tactile perception, characterized by comprising the following steps:
acquiring tactile data from the air bag array robot skin sensor under different contact modes;
extracting features from the tactile data collected by the air bag skin and storing the feature information;
establishing a fusion algorithm model for object recognition;
inputting the feature information and environmental information, and identifying the object shape and object type with the fusion algorithm model;
outputting the object shape and object type.
Further, the skin sensors are arranged on the robot in an array, and for calibration the skin sensor is fixed on a horizontal test platform for static characteristic calibration.
Further, the air bag skin sensor is contacted in different ways.
Further, the feature information is the data feature vector extracted from the air bag units; it comprises the air pressure values of the air bag array units and their air pressure variation per unit time.
Further, the fusion algorithm model is established with a Stacking fusion strategy; the specific steps are as follows (an illustrative sketch follows the steps):
s1: initializing the acquired tactile data set and dividing it into a training set and a test set;
s2: performing the first layer of training: training the models on the training set with K-fold cross validation, and obtaining prediction results on the training set and the test set;
s3: taking the first-layer training results, i.e. the training-set and test-set predictions, as meta-features of the training set and the test set in the second-layer model, and performing the second Stacking layer of training with a logistic regression classifier to obtain the final prediction result.
The invention also provides a pattern recognition system based on air bag array tactile perception, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor; when executing the program, the processor implements the following steps:
acquiring tactile data from the air bag array robot skin sensor under different contact modes;
extracting features from the tactile data collected by the air bag skin and storing the feature information;
establishing a fusion algorithm model for object recognition;
inputting the feature information and environmental information, and identifying the object shape and object type with the fusion algorithm model;
outputting the object shape and object type.
Further, the skin sensors are arranged on the robot in an array, and for calibration the skin sensor is fixed on a horizontal test platform for static characteristic calibration.
Further, the air bag skin sensor is contacted in different ways.
Further, the feature information is the data feature vector extracted from the air bag units; it comprises the air pressure values of the air bag array units and their air pressure variation per unit time.
Further, the fusion algorithm model is established with a Stacking fusion strategy; the specific steps are as follows:
s1: initializing the acquired tactile data set and dividing it into a training set and a test set;
s2: performing the first layer of training: training the models on the training set with K-fold cross validation, and obtaining prediction results on the training set and the test set;
s3: taking the first-layer training results, i.e. the training-set and test-set predictions, as meta-features of the training set and the test set in the second-layer model, and performing the second Stacking layer of training with a logistic regression classifier to obtain the final prediction result.
The invention has the beneficial effects that:
according to the mode recognition method based on the air bag array tactile perception, the tactile data of different objects are collected according to the air bag array robot skin perception system, the tactile perception data are used for training to obtain the mode recognition algorithm model, and finally, the machine learning algorithm model is used for recognizing the contact type of unknown tactile perception data.
The method adopts a main flow distributed framework of XG-Boost and L ightGBM and a fusion algorithm of multiple machine learning algorithms such as a random forest, integrates the advantages of complexity self-control and noise data robustness of the XG-Boost model, constructs a fusion pattern recognition algorithm based on a Stacking integration strategy, improves the model prediction effect, has good robustness for a large data set, optimizes L ightGBM memory and parallel calculation, and is simple and stable for the random forest.
The method performs training on the tactile data extracted from the dynamic environment with the change of the contact angle of the object and the contact position of the airbag array, so that the trained model has better robustness. The mode recognition is completed by using the geometric characteristics of objects with different shapes and different physical objects (including man-machine interaction and object contact), so that the method is beneficial to the analysis of touch exploration behaviors and the information supplement of obstacle avoidance on one hand, and the vacancy that the traditional robot does not contain a man-machine interaction type in the sense of touch of the skin is made up on the other hand.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
To make the objects, technical solutions and beneficial effects of the invention clearer, the following drawings are provided for explanation:
FIG. 1 is a block flow diagram of airbag skin sensor object model identification.
FIG. 2 is a schematic diagram of a Stacking model integration strategy flow.
Detailed Description
The present invention is further described below with reference to the drawings and specific examples, so that those skilled in the art can better understand and practice the invention; the examples, however, are not intended to limit the invention.
Example 1
As shown in fig. 1, the pattern recognition method based on air bag array tactile sensing provided by this embodiment trains a pattern recognition model on the tactile data collected during the tactile sensing process of the air bag array robot skin sensor, and uses the trained model to recognize new data.
In the method of this embodiment, the flexible robot skin sensor based on the air bag array acquires air pressure information when the array contacts an object, and a machine learning fusion algorithm model is trained to identify the shape type and the object type contacting the robot skin. Training on tactile data extracted in a dynamic environment, in which the contact angle of the object and the contact position on the air bag array vary, makes the trained model more robust. Pattern recognition is completed using the geometric characteristics of objects of different shapes and of different physical objects (including human-machine interaction and object contact); this supports the analysis of tactile exploration behaviour and supplements obstacle-avoidance information on the one hand, and on the other fills the gap that traditional robot skin tactile sensing does not cover human-machine interaction categories.
The pattern recognition method based on air bag array tactile perception provided by this embodiment comprises the following steps:
1. acquire tactile data of the air bag array robot skin sensor under different contact modes;
In this embodiment the skin sensor acquires tactile data under different contact modes by varying the contact angle of the object and the contact position on the air bag array;
2. extract features from the air pressure tactile data collected by the air bag skin and store the feature information;
3. train the fusion algorithm model on the extracted data;
4. complete the identification of the object shape and object type with the obtained fusion model;
The fusion algorithm model of this embodiment identifies objects through the established machine learning model; the contact category is predicted from the machine learning models together with other environmental information.
Example 2
The machine learning fusion pattern recognition method of this embodiment, built on the air bag array tactile perception skin sensor, requires data acquisition and measurement of the dynamic response characteristics of the air bag array tactile sensing mechanism to ensure the reliability and accuracy of the experimental data. The steps are as follows:
step 1, for the air bag array skin sensor attached to the intelligent robot, n air bag units are arranged in each of the transverse and longitudinal directions to form an n × n array; the array is calibrated in a specific test environment and the reliability of the skin sensor's tactile sensing is tested;
The air bag array skin sensor of this embodiment is fixed on a horizontal test platform, and its static characteristics are calibrated to obtain properties such as the stiffness and force-displacement behaviour of the air bag array skin. An MTS pressure tester is mainly used to calibrate the relevant static characteristics of the air bag array skin sensor, including the tensile and variable-stiffness characteristics.
The packaged, sealed air bag unit model is fixed on a horizontal tray, and vertical displacement is applied through a weight press above the press machine; the static (deformation and rebound) characteristics of a skin sensor unit are measured under different displacements, and the initial internal air pressure is varied to obtain the variable-stiffness characteristic, while the demodulator host computer acquires the weight gravity information and stores the data in real time. Finally, stable data are obtained by reading several groups of test data and removing abnormal values (one possible treatment is sketched below).
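As an illustration only, the "read several groups of test data and remove abnormal values" step could be implemented as in the sketch below. The median-absolute-deviation rule and the sample values are assumptions, not data from the calibration experiment.

import numpy as np

def stable_reading(groups: np.ndarray) -> float:
    # groups: repeated readings (e.g. kPa) for one displacement setting.
    # Discard points far from the median (a robust 3*MAD rule, assumed here),
    # then average what remains to obtain a stable calibration value.
    med = np.median(groups)
    mad = np.median(np.abs(groups - med))
    keep = np.abs(groups - med) <= 3.0 * 1.4826 * mad
    return float(groups[keep].mean())

readings = np.array([101.2, 101.4, 101.3, 120.0, 101.1])  # illustrative values
print(stable_reading(readings))  # 120.0 is rejected; prints 101.25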
step 2, objects of different types are controlled to contact the air bag array skin sensor in different modes, and the tactile sensing data are acquired;
The air bags are spliced into an array to build the air bag array skin sensing system; objects of different types are controlled to contact the air bag array skin sensor in different modes, and the tactile sensing data are acquired;
The air bag array can be mounted and fixed on a supporting plane, and the tactile perception data of each object under different working-condition modes are obtained with a controlled-variable method.
step 3, a feature database is established from the data feature vectors extracted from the air bag units; each vector consists of the n × n air bag array unit air pressure values and their variations per unit time, forming the m-dimensional (m = 2n²) feature vector W = [p0 … p(n×n), q0 … q(n×n)] (assembled as in the sketch below);
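A short sketch of how one sampling frame could be turned into the vector W is given below; the function name, the 4 × 4 example size and the 10 ms sampling interval are illustrative assumptions.

import numpy as np

def feature_vector(p_now: np.ndarray, p_prev: np.ndarray, dt: float) -> np.ndarray:
    # p_now, p_prev: n x n air pressure readings at two consecutive samples.
    # Returns W = [p0 ... p(n*n), q0 ... q(n*n)], where q is the pressure
    # change per unit time of each air bag unit, so len(W) = m = 2*n*n.
    p = p_now.ravel()
    q = (p_now - p_prev).ravel() / dt
    return np.concatenate([p, q])

# Example: a 4 x 4 array sampled every 10 ms.
W = feature_vector(np.random.rand(4, 4), np.random.rand(4, 4), dt=0.01)
assert W.shape == (32,)  # 2 * n * n with n = 4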
step 4, the fusion algorithm model is established: with a decision tree as the baseline, XGBoost, LightGBM and random forest are fused, and training with the fusion strategy shown in FIG. 2 yields the final fusion algorithm model;
step 5, external contacting objects are recognized with the trained fusion algorithm model.
The pattern recognition model of this embodiment fuses several machine learning algorithms, so the strengths of each model can be exploited and relatively weak models can be combined through a suitable strategy into a stronger predictor. Data acquisition covers changes in the position and angular direction at which objects contact the air bag skin sensor, so the trained learning network is more robust.
Example 3
As shown in fig. 2, which illustrates the Stacking model integration strategy flow, the fusion algorithm model of this embodiment is established by fusing machine learning algorithms with the Stacking hierarchical model integration method. Stacking is an effective way to fuse heterogeneous models: it improves the generalization ability of the model, reduces overfitting and improves the overall performance. The specific steps are as follows:
s1: initialize the data set and set m = 1, where m indexes the m-th single model; each single model is one of the decision tree, random forest, XGBoost or LightGBM;
s2: set i = 1, where i counts the cross-validation folds;
s3: cross-train the single model, then set i = i + 1;
s4: obtain the prediction results of the single model on the training set and the test set;
s5: if i ≤ 5, return to step s3;
s6: otherwise, set m = m + 1;
s7: if m ≤ 4, return to step s2;
s8: obtain the training-set prediction results Pm (m ∈ {1,2,3,4}) and the test-set prediction results Tn (n ∈ {1,2,3,4});
s9: train the second-layer model with logistic regression: the meta-features obtained from the first-layer training serve as the features of the logistic regression algorithm, a sigmoid function serves as the activation function, different weights are assigned to different features during training, the probability of each class is obtained for every sample, and the class with the highest probability is taken as the final prediction class;
s10: predict on the test set.
In the Stacking integrated model fusion process of this embodiment, the preprocessed original data set I is first taken and split into a training set P (80%) and a test set T (20%). The training set P is divided into five parts Pi (i ∈ {1,2,3,4,5}); four parts are used to train the model and the remaining part serves as the validation set. After the predictions on the validation part and on the test set are obtained, the parts are rotated, i.e. five-fold cross-validation is performed, which yields the complete prediction result Pm over the training set. Because of the five-fold cross-validation, the test set is predicted five times, which yields the test-set prediction result Tn. Repeating these steps for each of the four single models gives the training-set prediction results Pm (m ∈ {1,2,3,4}) and the test-set prediction results Tn (n ∈ {1,2,3,4}).
Pm and Tn are then taken as the meta-features of the training set and the test set in the second-layer model, and the second Stacking layer is trained with a logistic regression classifier to obtain the final prediction result; this two-layer loop is written out in the sketch below. The logistic regression classifier of this embodiment uses a sigmoid function as the activation function, assigns different weights to different features during training, obtains for every sample the probability of belonging to each class, and takes the class with the highest probability as the final prediction class.
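The s1 to s10 loop can be written out explicitly. The sketch below is one reading of FIG. 2, with two assumptions flagged in the comments: class labels are 0-indexed integers, and the five per-fold test predictions are merged by majority vote (the patent does not specify the merge rule). The data files are hypothetical placeholders.

import numpy as np
from sklearn.model_selection import KFold, train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

def first_layer(model, X_tr, y_tr, X_te, n_splits=5):
    # Out-of-fold predictions Pm on the training set and merged test-set
    # predictions Tn for one single model (steps s2 to s5).
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    p_m = np.zeros(len(X_tr), dtype=int)
    t_folds = np.zeros((n_splits, len(X_te)), dtype=int)
    for k, (fit_idx, val_idx) in enumerate(kf.split(X_tr)):
        model.fit(X_tr[fit_idx], y_tr[fit_idx])
        p_m[val_idx] = model.predict(X_tr[val_idx])
        t_folds[k] = model.predict(X_te)
    # Assumption: merge the five test-set predictions by majority vote.
    t_n = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, t_folds)
    return p_m, t_n

X, y = np.load("tactile_features.npy"), np.load("contact_labels.npy")  # hypothetical
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# The four single models looped over in s6/s7.
models = [DecisionTreeClassifier(), RandomForestClassifier(n_estimators=200),
          XGBClassifier(), LGBMClassifier()]

results = [first_layer(m, X_tr, y_tr, X_te) for m in models]
P = np.column_stack([p for p, _ in results])  # Pm, m in {1,2,3,4} (s8)
T = np.column_stack([t for _, t in results])  # Tn, n in {1,2,3,4} (s8)

# s9: second layer, logistic regression on the meta-features.
meta = LogisticRegression(max_iter=1000).fit(P, y_tr)
print("stacked test accuracy:", meta.score(T, y_te))  # s10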
By fusing the mainstream distributed frameworks XGBoost and LightGBM with machine learning algorithms such as random forest, and realizing the fusion through the Stacking process, the method integrates the complexity self-control and noise-data robustness of the XGBoost model, the memory and parallel-computation optimizations of LightGBM, and the simplicity and stability of random forest. The trained fusion algorithm therefore achieves higher accuracy, reliability and a wider range of application, overcoming the shortcomings of traditional pattern recognition methods.
Example 4
This embodiment provides a pattern recognition system based on air bag array tactile perception, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor; when executing the program, the processor implements the following steps:
acquiring tactile data from the air bag array robot skin sensor under different contact modes;
extracting features from the tactile data collected by the air bag skin and storing the feature information;
establishing a fusion algorithm model for object recognition;
inputting the feature information and environmental information, and identifying the object shape and object type with the fusion algorithm model;
outputting the object shape and object type.
The processor stores the fusion algorithm model used in the method; the model is established with a Stacking fusion strategy, specifically:
initializing the acquired tactile data set and dividing it into a training set and a test set; performing the first layer of training, in which the models are trained on the training set with K-fold cross validation and prediction results are obtained on the training set and the test set; and taking the first-layer training results, i.e. the training-set and test-set predictions, as meta-features of the training set and the test set in the second-layer model, and performing the second Stacking layer of training with a logistic regression classifier to obtain the final prediction result.
The above-mentioned embodiments are merely preferred embodiments for fully illustrating the invention, and the scope of the invention is not limited thereto. Equivalent substitutions or changes made by those skilled in the art on the basis of the invention all fall within the protection scope of the invention, which is subject to the claims.

Claims (10)

1. A pattern recognition method based on air bag array tactile perception, characterized by comprising the following steps:
acquiring tactile data from the air bag array robot skin sensor under different contact modes;
extracting features from the tactile data collected by the air bag skin and storing the feature information;
establishing a fusion algorithm model for object recognition;
inputting the feature information and environmental information, and identifying the object shape and object type with the fusion algorithm model;
outputting the object shape and object type.
2. The method of claim 1, wherein: the skin sensors are arranged on the robot in a linear array, and for calibration the skin sensor is fixed on a horizontal test platform for static characteristic calibration.
3. The method of claim 1, wherein: the air bag skin sensor is contacted in different ways.
4. The method of claim 1, wherein: the feature information is the data feature vector extracted from the air bag units, comprising the air pressure values of the air bag array units and their air pressure variation per unit time.
5. The method of claim 1, wherein: the fusion algorithm model is established with a Stacking fusion strategy, specifically:
s1: initializing the acquired tactile data set and dividing it into a training set and a test set;
s2: performing the first layer of training: training the models on the training set with K-fold cross validation, and obtaining prediction results on the training set and the test set;
s3: taking the first-layer training results, i.e. the training-set and test-set predictions, as meta-features of the training set and the test set in the second-layer model, and performing the second Stacking layer of training with a logistic regression classifier to obtain the final prediction result.
6. A pattern recognition system based on air bag array tactile perception, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein: when executing the program, the processor implements the following steps:
acquiring tactile data from the air bag array robot skin sensor under different contact modes;
extracting features from the tactile data collected by the air bag skin and storing the feature information;
establishing a fusion algorithm model for object recognition;
inputting the feature information and environmental information, and identifying the object shape and object type with the fusion algorithm model;
outputting the object shape and object type.
7. The system of claim 6, wherein: the skin sensors are arranged on the robot in an array, and for calibration the skin sensor is fixed on a horizontal test platform for static characteristic calibration.
8. The system of claim 6, wherein: the air bag skin sensor is contacted in different ways.
9. The system of claim 6, wherein: the feature information is the data feature vector extracted from the air bag units, comprising the air pressure values of the air bag array units and their air pressure variation per unit time.
10. The system of claim 6, wherein: the fusion algorithm model is established with a Stacking fusion strategy, specifically:
s1: initializing the acquired tactile data set and dividing it into a training set and a test set;
s2: performing the first layer of training: training the models on the training set with K-fold cross validation, and obtaining prediction results on the training set and the test set;
s3: taking the first-layer training results, i.e. the training-set and test-set predictions, as meta-features of the training set and the test set in the second-layer model, and performing the second Stacking layer of training with a logistic regression classifier to obtain the final prediction result.
CN202010350629.4A 2020-04-28 2020-04-28 Pattern recognition method and system based on air bag array tactile perception Pending CN111507425A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010350629.4A 2020-04-28 2020-04-28 Pattern recognition method and system based on air bag array tactile perception

Publications (1)

Publication Number Publication Date
CN111507425A (en) 2020-08-07

Family ID: 71877012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010350629.4A Pending 2020-04-28 Pattern recognition method and system based on air bag array tactile perception

Country Status (1)

Country Link
CN (1) CN111507425A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130166484A1 (en) * 2009-07-30 2013-06-27 Mitra J. Hartmann Systems, methods, and apparatus for 3-d surface mapping, compliance mapping, and spatial registration with an array of cantilevered tactile hair or whisker sensors
CN104019939A (en) * 2014-06-18 2014-09-03 合肥工业大学 Multi-dimensional force loading and calibrating device of touch sensor
CN105718884A (en) * 2016-01-20 2016-06-29 浙江大学 Object classification method based on multi-finger manipulator touch sense information characteristic extraction
CN105956351A (en) * 2016-07-05 2016-09-21 上海航天控制技术研究所 Touch information classified computing and modelling method based on machine learning
CN107053254A (en) * 2017-01-24 2017-08-18 重庆大学 Wearable robot skin based on multi-layer airbag
CN109359193A (en) * 2018-09-25 2019-02-19 济南大学 The abnormal phone recognition methods and system of two layers of frame of accumulation based on PCA dimensionality reduction

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112212898A (en) * 2020-09-09 2021-01-12 山东科技大学 Intelligent skin based on small-size distributed optical fiber sensing array
CN112668607A (en) * 2020-12-04 2021-04-16 深圳先进技术研究院 Multi-label learning method for recognizing tactile attributes of target object
CN113008418A (en) * 2021-02-26 2021-06-22 福州大学 Flexible tactile sensor of pressure drag type
CN113510700A (en) * 2021-05-19 2021-10-19 哈尔滨理工大学 Touch perception method for robot grabbing task
CN114046809A (en) * 2021-11-11 2022-02-15 广东省科学院半导体研究所 Optical sensing device, sensing equipment and system
CN114046809B (en) * 2021-11-11 2024-04-26 广东省科学院半导体研究所 Optical sensing device, sensing equipment and system

Similar Documents

Publication Publication Date Title
CN111507425A (en) Pattern recognition method and system based on air bag array tactile perception
Calandra et al. The feeling of success: Does touch sensing help predict grasp outcomes?
Marwala Computational Intelligence for Missing Data Imputation, Estimation, and Management: Knowledge Optimization Techniques
US9690906B2 (en) Living object investigation and diagnosis using a database of probabilities pertaining to ranges of results
CN111189577A (en) Sensor calibration and data measurement method, device, equipment and storage medium
Sun et al. Machine learning for haptics: Inferring multi-contact stimulation from sparse sensor configuration
Semwal et al. Tracking of fall detection using IMU sensor: An IoHT application
Veres et al. Incorporating object intrinsic features within deep grasp affordance prediction
Dawood et al. Incremental episodic segmentation and imitative learning of humanoid robot through self-exploration
Zhang et al. A Relation B-cell Network used for data identification and fault diagnosis
US11276285B2 (en) Artificial intelligence based motion detection
CN116531094A (en) Visual and tactile fusion navigation method and system for cornea implantation operation robot
Gao et al. On explainability and sensor-adaptability of a robot tactile texture representation using a two-stage recurrent networks
CN114707399A (en) Decoupling method of six-dimensional force sensor
Grover et al. Under pressure: Learning to detect slip with barometric tactile sensors
Riffo et al. Object recognition using tactile sensing in a robotic gripper
Liao et al. Enhancing robotic tactile exploration with multireceptive graph convolutional networks
Kozyr et al. Determining distance to an object and type of its material based on data of capacitive sensor signal and machine learning techniques
Jazouli et al. Stereotypical motor movement recognition using microsoft kinect with artificial neural network
Shen et al. Mapping of probe pretravel in dimensional measurements using neural networks computational technique
CN112802182B (en) Method and system for reconstructing anthropomorphic touch object based on touch sensor
Rouhafzay et al. A virtual tactile sensor with adjustable precision and size for object recognition
Magnussen et al. Continuous feature networks: A novel method to process irregularly and inconsistently sampled data with position-dependent features
Orii et al. Recurrent neural network for tactile texture recognition using pressure and 6-axis acceleration sensor data
Ma et al. A data-driven robotic tactile material recognition system based on electrode array bionic finger sensors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination