CN111079333A - Flexible touch sensor deep learning sensing method - Google Patents

Flexible touch sensor deep learning sensing method

Info

Publication number
CN111079333A
CN111079333A (application CN201911314290.6A)
Authority
CN
China
Prior art keywords
deep learning
flexible touch
sensor
touch sensor
sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911314290.6A
Other languages
Chinese (zh)
Other versions
CN111079333B (en)
Inventor
刘旺玉
郭正强
谢卫规
苟竞仁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201911314290.6A priority Critical patent/CN111079333B/en
Publication of CN111079333A publication Critical patent/CN111079333A/en
Application granted granted Critical
Publication of CN111079333B publication Critical patent/CN111079333B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Force Measurement Appropriate To Specific Purposes (AREA)

Abstract

The invention discloses a deep-learning perception method for a flexible tactile sensor, comprising the following steps: establishing a mechanical model of the sensor unit structure; establishing a mechanical model of the sensor array structure; obtaining a measured data set; obtaining a finite element simulation data set; improving data resolution by fusing the data sets; and establishing a perception mechanism model with deep learning. By fusing the data sets, the invention uses a deep learning model to obtain the relationship between the pressure signal and properties of the detected object such as its three-dimensional multi-scale geometry, surface morphology, and physical properties.

Description

Flexible touch sensor deep learning sensing method
Technical Field
The invention relates to the field of flexible touch sensor sensing, in particular to a flexible touch sensor deep learning sensing method.
Background
In discretized numerical modeling of the perception mechanism, the accuracy of finite-element-based numerical simulation depends on the size of the finite element mesh, which creates a trade-off between computational accuracy and efficiency. On the one hand, a high-precision perception result requires smaller and more numerous finite elements; on the other hand, more elements slow down the solution, and beyond a certain scale a contact-mechanics finite element model can no longer be solved in real time. A faster perception method is therefore needed to achieve real-time prediction of the sensor force signal and the perception model.
The perception mathematical model based on contact mechanics is highly nonlinear. In recent years, with the development of machine learning, deep learning has attracted growing attention for its excellent nonlinear fitting capability and has been widely applied to semantic perception problems such as robot body recognition, object detection, and semantic segmentation. Deep learning has also extended two-dimensional image-based perception into three-dimensional space for quantitative problems such as object pose estimation and motion estimation. The success of these tasks demonstrates that deep learning can solve quantitative estimation problems. However, existing sensing methods such as CN106446948A remain at the level of generic machine learning; perception-mechanism research that integrates measured data, finite element simulation data, and deep learning to improve sensing accuracy and efficiency has not yet been performed.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a deep-learning perception method for a flexible tactile sensor, in which the data sets are fused and a deep learning model is used to obtain the relationship between the pressure signal and properties of the detected object such as its three-dimensional multi-scale geometry, surface morphology, and physical properties.
The purpose of the invention is realized by the following technical scheme:
the flexible tactile sensor deep learning perception method comprises the following sequential steps:
step 1), establishing a sensor unit structure mechanical model;
step 2), establishing a sensor array structure mechanical model;
step 3), obtaining an actually measured data set;
step 4), obtaining a finite element simulation data set;
step 5), fusing the data sets to improve the data resolution;
and 6) establishing a perception mechanism model by combining deep learning.
The sensor unit is a pyramid sensor unit formed by four sensitive units arranged in a 2×2 grid inside the sensor unit.
The elastic behavior model and the stress-capacitance conversion model of the sensor unit are integrated into a representative sensor unit, and the sensor array is discretized, simplified, and modularized to construct its response mechanism.
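As a rough illustration of this discretized, modularized array model, the Python sketch below applies a hypothetical linear stress-capacitance law for one representative unit independently at every site of the array; the function names and the constants `c0` and `k` are assumptions for illustration and do not come from the patent.

```python
import numpy as np

def unit_capacitance(force, c0=1.0, k=0.05):
    """Illustrative stress-capacitance conversion for one representative
    unit: capacitance rises as the pyramid microstructure compresses.
    c0 (rest capacitance) and k (sensitivity) are assumed constants."""
    return c0 * (1.0 + k * force)

def array_response(force_map):
    """Discretized array model: apply the representative unit's response
    independently at every sensing site of the array."""
    return unit_capacitance(np.asarray(force_map, dtype=float))
```

A real model would also couple neighboring units (the cooperative strain mechanism mentioned later in the description); this sketch treats each site as independent.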
The measured data set is established by touching objects with distinct boundary features using the sensor array; the resulting coarse measured database is used for training.
Further, a fine simulation data set is obtained using finite element simulation.
The data fusion refers to establishing a mapping relation to realize the fusion of the simulation data set and the measured data set.
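The patent does not spell out the mapping; as a minimal sketch of one plausible fusion scheme, the following Python code upsamples the coarse measured map onto the finer simulation grid by bilinear interpolation and blends the two. The function name and the blend weight `alpha` are assumptions.

```python
import numpy as np

def fuse_datasets(measured, simulated, alpha=0.5):
    """Fuse a coarse measured pressure map with a fine simulated one.

    `measured` is upsampled to the simulation grid by bilinear
    interpolation, then blended with weight `alpha` (an assumed
    parameter; the patent only states that a mapping is established)."""
    h, w = simulated.shape
    mh, mw = measured.shape
    # Sample positions of the fine grid expressed in coarse-grid coordinates
    ys = np.linspace(0, mh - 1, h)
    xs = np.linspace(0, mw - 1, w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, mh - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, mw - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Bilinear interpolation of the measured map onto the simulation grid
    up = ((1 - wy) * (1 - wx) * measured[np.ix_(y0, x0)]
          + (1 - wy) * wx * measured[np.ix_(y0, x1)]
          + wy * (1 - wx) * measured[np.ix_(y1, x0)]
          + wy * wx * measured[np.ix_(y1, x1)])
    return alpha * up + (1 - alpha) * simulated
```

In the patent's terms, this combines the physical precision of the sensor (measured data) with the numerical precision of the finite element model (simulated data).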
The establishment of the perception mechanism model comprises high-resolution pressure cloud picture generation, fusion convolution operation, geometric relation reconstruction and microstructure and material attribute reconstruction.
High-resolution pressure cloud map generation comprises the following steps:
(i) obtaining a coarse pressure cloud map from the low-resolution pressure cloud map by linear interpolation;
(ii) applying convolutional down-sampling self-encoding to the low-resolution pressure cloud map to obtain a series of feature layers;
(iii) connecting the feature layers in a U-Net fashion to obtain the final high-resolution pressure cloud map with higher accuracy.
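A minimal numpy sketch of steps (i)-(iii), assuming a fixed smoothing kernel in place of learned convolution weights and a single skip connection standing in for the full U-Net; all names are illustrative:

```python
import numpy as np

def conv2d(x, k):
    """'Same' 2-D convolution with zero padding, stride 1."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def super_resolve(low_res, scale=2):
    """Steps (i)-(iii): interpolate, encode by a down-sampling
    convolution, and fuse features back via a skip connection."""
    # (i) coarse upsampling of the low-resolution map
    coarse = np.kron(low_res, np.ones((scale, scale)))
    # (ii) down-sampling self-encoding: smooth, then stride-2 pool
    smooth_k = np.ones((3, 3)) / 9.0
    feat = conv2d(coarse, smooth_k)[::2, ::2]
    # decode the feature layer back to full resolution
    decoded = np.kron(feat, np.ones((2, 2)))
    # (iii) U-Net-style skip: combine decoded features with the coarse map
    return 0.5 * (coarse + decoded[:coarse.shape[0], :coarse.shape[1]])
```

In the patent's method the kernels are learned and there are several encoder/decoder levels; this sketch only shows the data flow.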
The fusion convolution operation is as follows: to fuse object attributes such as boundary and material properties, a fusion convolution function is defined and used in the convolution operation of each layer. Let $x_i$ be the corresponding feature value, $b$ the bias, and $N(x_i)$ the neighborhood of $x_i$; the fusion convolution is defined as

$$x_i = \Psi(M_i) \sum_{j \in N(x_i)} w(x_j)\,(x_j \odot m_j) + b$$

where $w(\cdot)$ is a weight function, $M_i = \{m_j\}$ is the corresponding adaptive matrix, and $\odot$ denotes element-wise multiplication, which filters boundary values. The function $\Psi(M_i)$ is defined in terms of the object boundary, microscopic morphology, material properties, and so on; when it is combined with the convolution operation, the characteristics of the detected object are incorporated into the high-resolution pressure cloud map reconstruction.
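Under the definitions above, a single output value of the fusion convolution can be sketched as follows; the scalar `psi` stands in for $\Psi(M_i)$ and all argument names are illustrative rather than taken from the patent:

```python
import numpy as np

def fusion_conv(neighborhood, weights, mask, psi, b=0.0):
    """One fusion-convolution output value.

    neighborhood: feature values x_j in N(x_i)
    weights:      w(x_j), the kernel weights
    mask:         adaptive matrix M_i = {m_j}, filtering boundary values
    psi:          boundary/material modulation Psi(M_i), a scalar here
    b:            bias
    """
    return psi * np.sum(weights * (neighborhood * mask)) + b
```

In a full network, `psi` and `mask` would be computed from the object's boundary, microscopic morphology, and material properties, and the operation would be applied at every position of every layer.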
Geometric relationship reconstruction specifically comprises: extracting features from the input three-dimensional pressure cloud map with a graph convolutional neural network and performing regression reconstruction of the three-dimensional model, so that the geometry of the detected object is obtained from the sensor signal.
Microstructure and material property reconstruction specifically comprises: building, through measurement and finite element simulation, a database pairing the sensor's three-dimensional pressure cloud maps with object surface microstructures, materials, and so on; arranging the three-dimensional pressure cloud map into a 3 × n matrix as the input feature; performing quantitative regression training with a DenseNet+ReLU block multilayer perceptron; and finally outputting a numerical vector of the corresponding surface microstructure and material, so that the material properties of the detected object are obtained from the sensor signal.
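The arrangement of the pressure cloud into a 3 × n matrix and the block multilayer perceptron can be sketched as follows; the weights are untrained placeholders, and a plain dense MLP with ReLU stands in for the DenseNet blocks described above:

```python
import numpy as np

def cloud_to_matrix(points):
    """Arrange a three-dimensional pressure cloud, given as a list of
    (x, y, pressure) points, into the 3 x n input matrix."""
    return np.asarray(points, dtype=float).T  # shape (3, n)

def mlp_regress(x, layers):
    """Minimal dense multilayer perceptron with ReLU activations,
    standing in for the DenseNet+ReLU blocks.

    `layers` is a list of (W, b) pairs; the last pair is a linear
    output layer producing the property vector (e.g. microstructure
    and material parameters)."""
    h = x.reshape(-1)                       # flatten the 3 x n matrix
    for W, b in layers[:-1]:
        h = np.maximum(W @ h + b, 0.0)      # dense layer + ReLU
    W, b = layers[-1]
    return W @ h + b                        # linear regression head
```

In the trained system the `(W, b)` pairs come from quantitative regression training on the measurement/simulation database; here they are free parameters.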
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The method uses the established mechanical model of the tactile sensor to numerically simulate the sensor array touching characteristic objects, producing simulated tactile data that serves as an auxiliary database alongside data acquired from the physical sensor; a mapping model is established to improve the precision of the training data set, constructing a large-scale data set of force signals and corresponding high-resolution three-dimensional structures;
2. An end-to-end deep network built with convolutional neural network techniques is trained on the data set, so that the geometry, surface microstructure, material, and other attributes of a three-dimensional object can be quickly identified from the force signal acquired by the sensor, constructing the three-dimensional structural relationship with the detected object.
Drawings
FIG. 1-1 is a schematic diagram of a sensor unit; fig. 1-2 are exploded views of a sensor unit.
Fig. 2 is a schematic view of the contact relationship between the sensor array and the surface feature of the detection object.
Fig. 3 is a schematic diagram of a depth network for generating a high resolution pressure cloud.
Fig. 4 is a schematic diagram of geometric relationship reconstruction based on a three-dimensional pressure cloud image.
Fig. 5 is a schematic diagram of the sensing mechanism.
Wherein the reference numerals are as follows:
1 - adaptive convolution operation, 2 - concatenation operation, 3 - up-sampling operation, 4 - sensor unit, 5 - sensitive unit.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
The invention relates to the field of flexible tactile sensor perception mechanisms, in particular to a deep-learning perception method for a flexible tactile sensor. A three-directional mechanical model of the contact state between the sensor's surface microstructure and the three-dimensional microstructure of the touched object's surface is established by numerical simulation and experiment; the measured data set and the finite element contact simulation data set are fused; and a neural-network deep learning model is used to obtain the relationship between the pressure signal and properties of the detected object such as its three-dimensional multi-scale geometry, surface morphology, and physical properties. The method comprises the following steps:
step 1), establishing a sensor unit structure mechanical model;
step 2), establishing a sensor array structure mechanical model;
step 3), obtaining an actually measured data set;
step 4), obtaining a finite element simulation data set;
step 5), fusing the data sets to improve the data resolution;
and 6) establishing a perception mechanism model by combining deep learning.
The sensor is designed with four sensitive units forming a pyramid sensor unit, arranged in a 2×2 grid inside the sensor unit, as shown in Figs. 1-1 and 1-2. The upper and lower electrode layers of each sensing unit are connected in series with adjacent units, realizing the array connection of the sensitive units; combined with a micro-capacitance detection circuit, high-resolution tactile sensing can be achieved. Furthermore, the elastic behavior model and the stress-capacitance conversion model of the sensor unit are integrated into a representative sensor unit; using Matlab programming, the mechanical response mechanism of the sensor array is simulated in a modularized, discretized fashion, taking into account the cooperative strain mechanism under force loads in different directions. The sensor array touches objects with distinct boundary features to build a coarse measured database for training. With a finite element tool, the stress response of the sensor array to different microscopic contact geometries (Fig. 2) and other tactile features is simulated, yielding tactile feedback data under small boundary perturbations that serves as a fine data set to reinforce and supplement the measured data set; the simulation and measured data sets are fused by establishing a mapping relationship, combining the sensor's physical precision with numerical algorithmic precision and improving the detection accuracy and sensitivity of the sensor. A deep network is then used to build the perception mechanism model, comprising high-resolution pressure cloud map generation, the fusion convolution operation, geometric relationship reconstruction, and microstructure and material property reconstruction.
The high-resolution pressure cloud map generation steps are shown in Fig. 3: (i) obtain a coarse pressure cloud map from the low-resolution pressure cloud map by linear interpolation; (ii) apply convolutional down-sampling self-encoding to the low-resolution pressure cloud map to obtain a series of feature layers; (iii) connect the feature layers in a U-Net fashion to obtain the high-resolution pressure cloud map with higher accuracy. To fuse object attributes such as boundary and material, a fusion convolution function is defined and used in the convolution operation of each layer. Let $x_i$ be the corresponding feature value, $b$ the bias, and $N(x_i)$ the neighborhood of $x_i$; the fusion convolution is defined as

$$x_i = \Psi(M_i) \sum_{j \in N(x_i)} w(x_j)\,(x_j \odot m_j) + b$$

where $w(\cdot)$ is a weight function, $M_i = \{m_j\}$ is the corresponding adaptive matrix, and $\odot$ denotes element-wise multiplication, which filters boundary values. The function $\Psi(M_i)$ may be defined in terms of object boundaries, microscopic morphology, material properties, and so on; combined with the convolution operation, the characteristics of the detected object can be incorporated into the high-resolution pressure cloud map reconstruction.
Features of the input three-dimensional pressure cloud map are extracted with a graph convolutional neural network, and the three-dimensional model is then reconstructed by regression, so that the geometry of the detected object is obtained from the sensor signal; the reconstruction process is shown in Fig. 4.
A database pairing the sensor's three-dimensional pressure cloud maps with object surface microstructures, materials, and so on is built through measurement and finite element simulation; the three-dimensional pressure cloud map is then arranged into a 3 × n matrix as the input feature, a DenseNet+ReLU block multilayer perceptron performs quantitative regression training, and the output is a numerical vector of the corresponding surface microstructure and material, so that the material properties of the detected object are obtained from the sensor signal. The sensing mechanism is shown in Fig. 5.
The above embodiments are preferred embodiments of the present invention, but the invention is not limited to them; any change, modification, substitution, combination, or simplification that does not depart from the spirit and principle of the invention is to be regarded as an equivalent and is included within its scope.

Claims (10)

1. The flexible touch sensor deep learning perception method is characterized by comprising the following sequential steps:
step 1), establishing a sensor unit structure mechanical model;
step 2), establishing a sensor array structure mechanical model;
step 3), obtaining an actually measured data set;
step 4), obtaining a finite element simulation data set;
step 5), fusing the data sets to improve the data resolution;
and 6) establishing a perception mechanism model by combining deep learning.
2. The flexible touch sensor deep learning perception method according to claim 1, wherein the sensor unit is a pyramid sensor unit consisting of four sensitive units arranged in a 2×2 grid inside the sensor unit.
3. The flexible touch sensor deep learning perception method according to claim 1, wherein the elastic behavior model and the stress-capacitance conversion model of the sensor unit are integrated into a representative sensor unit, and the sensor array is discretized, simplified, and modularized to construct its response mechanism.
4. The flexible touch sensor deep learning perception method according to claim 1, wherein the measured data set is established by touching objects with distinct boundary features using the sensor array.
5. The method for the deep learning and sensing of the flexible touch sensor according to claim 1, wherein the data fusion refers to establishing a mapping relationship to realize the fusion of a simulation data set and a measured data set.
6. The method for the deep learning and perception of the flexible touch sensor according to claim 1, wherein the establishing of the perception mechanism model comprises high-resolution pressure cloud image generation, fusion convolution operation, geometric relation reconstruction and microstructure and material property reconstruction.
7. The flexible touch sensor deep learning perception method according to claim 6, wherein the high-resolution pressure cloud map generation comprises the following steps:
(i) obtaining a coarse pressure cloud map from the low-resolution pressure cloud map by linear interpolation;
(ii) applying convolutional down-sampling self-encoding to the low-resolution pressure cloud map to obtain a series of feature layers;
(iii) connecting the feature layers in a U-Net fashion to obtain the final high-resolution pressure cloud map with higher accuracy.
8. The flexible touch sensor deep learning perception method according to claim 6, wherein the fusion convolution operation specifically comprises: defining a fusion convolution function that fuses object attributes such as boundary and material, and using it in the convolution operation of each layer; letting $x_i$ be the corresponding feature value, $b$ the bias, and $N(x_i)$ the neighborhood of $x_i$, the fusion convolution is defined as

$$x_i = \Psi(M_i) \sum_{j \in N(x_i)} w(x_j)\,(x_j \odot m_j) + b$$

where $w(\cdot)$ is a weight function, $M_i = \{m_j\}$ is the corresponding adaptive matrix, and $\odot$ denotes element-wise multiplication, which filters boundary values; the function $\Psi(M_i)$ is defined in terms of the object boundary, microscopic morphology, material properties, and so on, and when combined with the convolution operation, the characteristics of the detected object are incorporated into the high-resolution pressure cloud map reconstruction.
9. The flexible touch sensor deep learning perception method according to claim 6, wherein the geometric relationship reconstruction specifically comprises: extracting features from the input three-dimensional pressure cloud map with a graph convolutional neural network and performing regression reconstruction of the three-dimensional model, so that the geometry of the detected object is obtained from the sensor signal.
10. The flexible touch sensor deep learning perception method according to claim 6, wherein the microstructure and material property reconstruction specifically comprises: building, through measurement and finite element simulation, a database pairing the sensor's three-dimensional pressure cloud maps with object surface microstructures, materials, and so on; arranging the three-dimensional pressure cloud map into a 3 × n matrix as the input feature; performing quantitative regression training with a DenseNet+ReLU block multilayer perceptron; and finally outputting a numerical vector of the corresponding surface microstructure and material, so that the material properties of the detected object are obtained from the sensor signal.
CN201911314290.6A 2019-12-19 2019-12-19 Deep learning sensing method of flexible touch sensor Active CN111079333B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911314290.6A CN111079333B (en) 2019-12-19 2019-12-19 Deep learning sensing method of flexible touch sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911314290.6A CN111079333B (en) 2019-12-19 2019-12-19 Deep learning sensing method of flexible touch sensor

Publications (2)

Publication Number Publication Date
CN111079333A (en) 2020-04-28
CN111079333B CN111079333B (en) 2024-03-12

Family

ID=70315541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911314290.6A Active CN111079333B (en) 2019-12-19 2019-12-19 Deep learning sensing method of flexible touch sensor

Country Status (1)

Country Link
CN (1) CN111079333B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111796708A (en) * 2020-06-02 2020-10-20 南京信息工程大学 Method for reproducing three-dimensional shape characteristics of image on touch screen
CN111964821A (en) * 2020-08-05 2020-11-20 清华大学深圳国际研究生院 Pressure touch prediction method and pressure touch prediction model for electronic skin
CN112802182A (en) * 2021-01-20 2021-05-14 同济大学 Anthropomorphic touch object reconstruction method and system based on touch sensor
WO2022111799A1 (en) * 2020-11-24 2022-06-02 MAX-PLANCK-Gesellschaft zur Förderung der Wissenschaften e.V. Method for force inference of a sensor arrangement, methods for training networks, force inference module and sensor arrangement
CN114894354A (en) * 2022-04-11 2022-08-12 汕头大学 Pressure perception feedback device based on surface structure color and deep learning identification method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20030027636A (en) * 2001-09-29 2003-04-07 홍동표 A Sensor capable of sensing objects
CN1539604A (en) * 2003-11-01 2004-10-27 中国科学院合肥智能机械研究所 Flexible touch sensor and touch information detection method
US20130166484A1 (en) * 2009-07-30 2013-06-27 Mitra J. Hartmann Systems, methods, and apparatus for 3-d surface mapping, compliance mapping, and spatial registration with an array of cantilevered tactile hair or whisker sensors
CN110135485A (en) * 2019-05-05 2019-08-16 浙江大学 The object identification and localization method and system that monocular camera is merged with millimetre-wave radar


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A. GÓMEZ EGUÍLUZ 等: "Multimodal Material identification through recursive tactile sensing", ROBOTICS AND AUTONOMOUS SYSTEMS, vol. 106, 7 May 2018 (2018-05-07), pages 130 - 139 *
YU Le et al.: "Research on soft and hard tactile perception method based on convolutional neural network", Transducer and Microsystem Technologies, vol. 36, no. 06, 31 December 2017 (2017-12-31), pages 35 - 37 *
GUO Xiaohui et al.: "Design and experiment of a capacitance-resistance dual-mode material recognition sensor", Journal of Huazhong University of Science and Technology (Natural Science Edition), vol. 43, no. 1, 31 October 2015 (2015-10-31), pages 220 - 223 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111796708A (en) * 2020-06-02 2020-10-20 南京信息工程大学 Method for reproducing three-dimensional shape characteristics of image on touch screen
CN111796708B (en) * 2020-06-02 2023-05-26 南京信息工程大学 Method for reproducing three-dimensional shape features of image on touch screen
CN111964821A (en) * 2020-08-05 2020-11-20 清华大学深圳国际研究生院 Pressure touch prediction method and pressure touch prediction model for electronic skin
WO2022111799A1 (en) * 2020-11-24 2022-06-02 MAX-PLANCK-Gesellschaft zur Förderung der Wissenschaften e.V. Method for force inference of a sensor arrangement, methods for training networks, force inference module and sensor arrangement
CN112802182A (en) * 2021-01-20 2021-05-14 同济大学 Anthropomorphic touch object reconstruction method and system based on touch sensor
CN112802182B (en) * 2021-01-20 2022-12-16 同济大学 Method and system for reconstructing anthropomorphic touch object based on touch sensor
CN114894354A (en) * 2022-04-11 2022-08-12 汕头大学 Pressure perception feedback device based on surface structure color and deep learning identification method
CN114894354B (en) * 2022-04-11 2023-06-13 汕头大学 Pressure sensing feedback device based on surface structural color and deep learning identification method

Also Published As

Publication number Publication date
CN111079333B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
CN111079333B (en) Deep learning sensing method of flexible touch sensor
CN110188598B (en) Real-time hand posture estimation method based on MobileNet-v2
Zhang et al. Fingervision tactile sensor design and slip detection using convolutional lstm network
Qian et al. PUGeo-Net: A geometry-centric network for 3D point cloud upsampling
CN101819462B (en) Image texture haptic representation system based on force/haptic interaction equipment
Suresh et al. Shapemap 3-d: Efficient shape mapping through dense touch and vision
CN101615072A (en) Based on method for reproducing texture force touch from the shading shape technology
CN108594660B (en) Working modal parameter identification method and system of time invariant structure
Kuppuswamy et al. Fast model-based contact patch and pose estimation for highly deformable dense-geometry tactile sensors
CN111204476A (en) Vision-touch fusion fine operation method based on reinforcement learning
Li et al. Assemblies of microfluidic channels and micropillars facilitate sensitive and compliant tactile sensing
Seminara et al. Tactile data processing method for the reconstruction of contact force distributions
Lee et al. Predicting the force map of an ert-based tactile sensor using simulation and deep networks
Dai et al. Design of a biomimetic tactile sensor for material classification
Van der Merwe et al. Integrated object deformation and contact patch estimation from visuo-tactile feedback
Soter et al. Shape reconstruction of CCD camera-based soft tactile sensors
CN113947119A (en) Method for detecting human gait by using plantar pressure signals
Du et al. 3D contact point cloud reconstruction from vision-based tactile flow
Luo et al. Contact and deformation modeling for interactive environments
Wang et al. Tactile sensory response prediction and design using virtual tests
Wang et al. A novel vision-based tactile sensor using particle image velocimetry for multi-modal object detection and force sensing
CN116029205A (en) Flow field reconstruction method based on intrinsic orthogonal decomposition and deep learning fusion
Wang et al. Elastic interaction of particles for robotic tactile simulation
Rasoulzadeh et al. Linking early design stages with physical simulations using machine learning
Pinto-Salamanca et al. An estimation of triaxial forces from normal stress tactile sensor arrays

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant